WorldWideScience

Sample records for hilbert-schmidt probability distributions

  1. Hilbert-Schmidt expansion for the nucleon-deuteron scattering amplitude

    International Nuclear Information System (INIS)

    Moeller, K.; Narodetskii, I.M.

    1983-01-01

    The Hilbert-Schmidt method is used to sum the divergent iterative series for the partial amplitudes of nucleon-deuteron scattering in the energy region above the deuteron breakup threshold. It is observed that the Hilbert-Schmidt series for the partial amplitudes themselves diverges, owing to the closeness of the logarithmic singularities. If, however, the first iterations of the multiple-scattering series are subtracted from the amplitude, the Hilbert-Schmidt series for the remainder converges rapidly. The final answer obtained in the present paper is in excellent agreement with the results of exact calculations.

  2. Hilbert-Schmidt method for nucleon-deuteron scattering

    International Nuclear Information System (INIS)

    Moeller, K.; Narodetskij, I.M.

    1983-01-01

    The Hilbert-Schmidt technique is used to compute the divergent multiple-scattering series for scattering of nucleons by deuterons at energies above the deuteron breakup threshold. It is found that for each partial amplitude the series of s-channel resonances diverges because of logarithmic singularities which reflect the t-channel singularities of the total amplitude. However, the convergence of the Hilbert-Schmidt series may be improved by iterating the Faddeev equations, thereby extracting the strongest logarithmic singularities. It is shown that the series for the amplitudes with the first two iterations subtracted converges rapidly. The final results are in excellent agreement with exact results obtained by a direct matrix technique.

  3. The Hilbert-Schmidt method for nucleon-deuteron scattering

    International Nuclear Information System (INIS)

    Moeller, K.; Narodetskii, I.M.

    1984-01-01

    The Hilbert-Schmidt technique is used to compute the divergent multiple-scattering series for scattering of nucleons by deuterons at energies above the deuteron breakup threshold. We have found that for each partial amplitude the series of s-channel resonances diverges because of logarithmic singularities which reflect the t-channel singularities of the total amplitude. However, the convergence of the Hilbert-Schmidt series may be improved by iterating the Faddeev equations, thereby extracting the strongest logarithmic singularities. We show that the series for the amplitudes with the first two iterations subtracted converges rapidly. Our final results are in excellent agreement with exact results obtained by a direct matrix technique. (orig.)

  4. nth roots with Hilbert-Schmidt defect operator of normal contractions

    International Nuclear Information System (INIS)

    Duggal, B.P.

    1992-08-01

    Let T be a normal contraction (on a complex separable Hilbert space H into itself) with an nth root A such that the defect operator D_A = (1 − A*A)^{1/2} is of the Hilbert-Schmidt class C_2. Then either A is normal or A is similar to a normal contraction. In the case in which T is hyponormal, A^n = T and D_A ∈ C_2, A is a ''coupling'' of a contraction similar to a normal contraction and a contraction which is a quasi-affine transform of a unilateral shift. These results are applied to prove a (Putnam-Fuglede type) commutativity theorem for operator-valued roots of commutative analytic functions and hyponormal contractions T which have an nth root with Hilbert-Schmidt defect operator. 23 refs.

  5. Advances in delimiting the Hilbert-Schmidt separability probability of real two-qubit systems

    International Nuclear Information System (INIS)

    Slater, Paul B

    2010-01-01

    We seek to derive the probability, expressed in terms of the Hilbert-Schmidt (Euclidean or flat) metric, that a generic (nine-dimensional) real two-qubit system is separable, by implementing the well-known Peres-Horodecki test on the partial transposes (PTs) of the associated 4 × 4 density matrices (ρ). But the full implementation of the test, requiring that the determinant of the PT be nonnegative for separability to hold, appears to be, at least presently, computationally intractable. So, we have previously implemented, using the auxiliary concept of a diagonal-entry-parameterized separability function (DESF), the weaker implied test of nonnegativity of the six 2 × 2 principal minors of the PT. This yielded an exact upper bound on the separability probability of 1024/(135π²) ≈ 0.76854. Here, we piece together (reflection-symmetric) results obtained by requiring that each of the four 3 × 3 principal minors of the PT, in turn, be nonnegative, giving an improved/reduced upper bound of 22/35 ≈ 0.628571. Then, we conclude that a still further improved upper bound of 1129/2100 ≈ 0.537619 can be found by similarly piecing together the (reflection-symmetric) results of enforcing the simultaneous nonnegativity of certain pairs of the four 3 × 3 principal minors. Numerical simulations, as opposed to exact symbolic calculations, indicate, on the other hand, that the true probability is certainly less than 1/2. Our analyses lead us to suggest a possible form for the true DESF, yielding a separability probability of 29/64 ≈ 0.453125, while the absolute separability probability of (6928 − 2205π)/2^{9/2} ≈ 0.0348338 provides the best exact lower bound established so far. In deriving our improved upper bounds, we rely repeatedly upon certain integrals over cubes that arise. Finally, we apply an independence assumption to a pair of DESFs that comes close to reproducing our numerical estimate of the true separability function.

  6. Hilbert-Schmidt quantum coherence in multi-qudit systems

    Science.gov (United States)

    Maziero, Jonas

    2017-11-01

    Using Bloch's parametrization for qudits (d-level quantum systems), we write the Hilbert-Schmidt distance (HSD) between two generic n-qudit states as a Euclidean distance between two vectors of observable mean values in R^{∏_{s=1}^{n} d_s² − 1}, where d_s is the dimension of qudit s. Then, taking the generalized Gell-Mann matrices as generators of SU(d_s), we use that result to obtain the Hilbert-Schmidt quantum coherence (HSC) of n-qudit systems. As examples, we consider in detail one-qubit, one-qutrit, two-qubit, and two copies of one-qubit states. In this last case, the possibility of controlling local and non-local coherences by tuning local populations is studied, and the contrasting behaviors of the HSC, the l1-norm coherence, and the relative entropy of coherence in this regard are noted. We also investigate the decoherent dynamics of these coherence functions under the action of qutrit dephasing and dissipation channels. Finally, we analyze the non-monotonicity of the HSD under tensor products and report the first instance of a consequence (for coherence quantification) of this kind of property of a quantum distance measure.
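    For a single qubit (d = 2, where the generalized Gell-Mann matrices reduce to the Paulis), the HSD-as-Euclidean-distance statement can be checked numerically. A small sketch with an arbitrary (illustrative) pair of Bloch vectors; the 1/√2 factor comes from the normalization tr(σ_i σ_j) = 2δ_ij:

```python
import numpy as np

# Pauli basis for a single qubit (the d = 2 generalized Gell-Mann matrices)
paulis = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], complex)]

def rho_from_bloch(r):
    # rho = (I + r . sigma) / 2
    return 0.5 * (np.eye(2) + sum(ri * s for ri, s in zip(r, paulis)))

def hs_distance(a, b):
    # HSD(a, b) = sqrt(tr((a - b)^dagger (a - b)))
    d = a - b
    return np.sqrt(np.trace(d.conj().T @ d).real)

r, s = np.array([0.3, 0.1, -0.5]), np.array([-0.2, 0.4, 0.0])
rho, sigma = rho_from_bloch(r), rho_from_bloch(s)
# The operator distance equals a rescaled Euclidean distance of Bloch vectors:
print(hs_distance(rho, sigma), np.linalg.norm(r - s) / np.sqrt(2))
```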

  7. Master Lovas-Andai and equivalent formulas verifying the 8/33 two-qubit Hilbert-Schmidt separability probability and companion rational-valued conjectures

    Science.gov (United States)

    Slater, Paul B.

    2018-04-01

    We begin by investigating relationships between two forms of Hilbert-Schmidt two-rebit and two-qubit "separability functions": those recently advanced by Lovas and Andai (J Phys A Math Theor 50(29):295303, 2017), and those earlier presented by Slater (J Phys A 40(47):14279, 2007). In the Lovas-Andai framework, the independent variable ε ∈ [0, 1] is the ratio σ(V) of the singular values of the 2 × 2 matrix V = D_2^{1/2} D_1^{−1/2} formed from the two 2 × 2 diagonal blocks (D_1, D_2) of a 4 × 4 density matrix D = ||ρ_{ij}||. In the Slater setting, the independent variable μ is the diagonal-entry ratio √(ρ_{11} ρ_{44}/(ρ_{22} ρ_{33})), with, of central importance, μ = ε or μ = 1/ε when both D_1 and D_2 are themselves diagonal. Lovas and Andai established that their two-rebit "separability function" χ̃_1(ε) (≈ ε) yields the previously conjectured Hilbert-Schmidt separability probability of 29/64. We are able, in the Slater framework (using cylindrical algebraic decompositions [CAD] to enforce positivity constraints), to reproduce this result. Further, we newly find its two-qubit, two-quater[nionic]-bit and "two-octo[nionic]-bit" counterparts, χ̃_2(ε) = (1/3) ε² (4 − ε²), χ̃_4(ε) = (1/35) ε⁴ (15 ε⁴ − 64 ε² + 84) and χ̃_8(ε) = (1/1287) ε⁸ (1155 ε⁸ − 7680 ε⁶ + 20160 ε⁴ − 25088 ε² + 12740). These immediately lead to predictions of Hilbert-Schmidt separability/PPT-probabilities of 8/33, 26/323 and 44482/4091349, in full agreement with those of the "concise formula" (Slater in J Phys A 46:445302, 2013) and, additionally, of a "specialized induced measure" formula. Then, we find a Lovas-Andai "master formula", χ̃_d(ε) = ε^d Γ(d+1)³ ₃F̃₂(−d/2, d/2, d; d/2+1, 3d/2+1; ε²)/Γ(d/2+1)², encompassing both even and odd values of d. Remarkably, we are able to obtain the χ̃_d(ε) formulas, d = 1, 2, 4, applicable to full (9-, 15-, 27-) dimensional sets of

  8. Pairs of dual Gabor frames generated by functions of Hilbert-Schmidt type

    DEFF Research Database (Denmark)

    Christiansen, Lasse Hjuler

    2015-01-01

    where each member may be written as a linear combination of integer translates of any B-spline. We introduce functions of Hilbert-Schmidt type along with a new method which allows us to associate to certain such functions finite families of recursively defined dual windows of arbitrary smoothness...

  9. Four-nucleon problem in terms of scattering of Hilbert-Schmidt resonances

    International Nuclear Information System (INIS)

    Narodetsky, I.M.

    1974-01-01

    The four-body integral equations are written in terms of the scattering amplitudes for the Hilbert-Schmidt resonances corresponding to the 3×1 and 2×2 subsystems. As a result, the four-body problem is reduced to a many-channel two-body problem. A simple diagram technique is introduced, a generalization of the usual time-ordered nonrelativistic one. The connection between the amplitudes of the two-body reactions and the scattering amplitudes for the resonances is obtained.

  10. Qubit-qutrit separability-probability ratios

    International Nuclear Information System (INIS)

    Slater, Paul B.

    2005-01-01

    Paralleling our recent computationally intensive (quasi-Monte Carlo) work for the case N = 4 (e-print quant-ph/0308037), we undertake the task for N = 6 of computing to high numerical accuracy the formulas of Sommers and Zyczkowski (e-print quant-ph/0304041) for the (N² − 1)-dimensional volume and (N² − 2)-dimensional hyperarea of the (separable and nonseparable) N × N density matrices, based on the Bures (minimal monotone) metric, and also their analogous formulas (e-print quant-ph/0302197) for the (nonmonotone) flat Hilbert-Schmidt metric. With the same 7 × 10⁹ well-distributed ('low-discrepancy') sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable-plus-nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-6 (rank-5) density matrices. The (rank-6) separability probabilities obtained from the 35-dimensional volumes appear, independently of the metric employed (each of the seven inducing Haar measure), to be twice as large as those (rank-5 ones) based on the 34-dimensional hyperareas. (An additional estimate, 33.9982, of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is also quite clearly close to an integer.) The doubling relationship also appears to hold for the N = 4 case for the Hilbert-Schmidt metric, but not the others. We fit simple exact formulas to our estimates of the Hilbert-Schmidt separable volumes and hyperareas in both the N = 4 and N = 6 cases.

  11. Two-shot fringe pattern phase-amplitude demodulation using Gram-Schmidt orthonormalization with Hilbert-Huang pre-filtering.

    Science.gov (United States)

    Trusiak, Maciej; Patorski, Krzysztof

    2015-02-23

    Gram-Schmidt orthonormalization is a very fast and efficient method for fringe pattern phase demodulation. It requires only two arbitrarily phase-shifted frames. Images are treated as vectors, and upon orthogonal projection of one fringe vector onto another the quadrature fringe pattern pair is obtained. The orthonormalization process is, however, very susceptible to noise, uneven background, and amplitude modulation fluctuations. Hilbert-Huang transform based preprocessing is proposed to enhance fringe pattern phase demodulation by filtering out the spurious noise and background illumination and performing fringe normalization. An error analysis of the Gram-Schmidt orthonormalization process is provided, and its filtering-expanded capabilities are corroborated by analyzing DSPI fringes and performing amplitude demodulation of Bessel fringes. Synthetic and experimental fringe pattern analyses presented to validate the proposed technique show that it compares favorably with other pre-filtering schemes, i.e., Gaussian filtering and the continuous wavelet transform.
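    The two-frame orthonormalization step itself is compact. A sketch on synthetic fringes (the test phase map and the phase shift below are hypothetical, and the Hilbert-Huang pre-filtering the paper adds is assumed to have already removed background and noise):

```python
import numpy as np

# Synthetic pair of phase-shifted fringe patterns (background-free)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
phase = 8 * np.pi * x + 3 * (x - 0.5) ** 2  # hypothetical test phase
I1 = np.cos(phase)
I2 = np.cos(phase + 1.0)  # arbitrary (unknown) phase shift

def gram_schmidt_demodulate(I1, I2):
    u1 = I1 - I1.mean()
    u2 = I2 - I2.mean()
    # Orthogonal projection of the second fringe vector onto the first
    # yields (approximately) the quadrature pattern.
    u2 = u2 - (np.sum(u1 * u2) / np.sum(u1 * u1)) * u1
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(u2, u1)  # wrapped phase, up to sign and offset

demod = gram_schmidt_demodulate(I1, I2)
```

The recovered map agrees with the true phase modulo 2π up to a global sign fixed by the sign of the (unknown) phase shift.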

  12. Quantum theory in complex Hilbert space

    International Nuclear Information System (INIS)

    Sharma, C.S.

    1988-01-01

    The theory of complexification of a real Hilbert space, as developed by the author, is scrutinized with the aim of explaining why quantum theory should be done in a complex Hilbert space in preference to a real Hilbert space. It is suggested that, in order to describe periodic motions in stationary states of a quantum system, the mathematical object modelling a state of the system should have enough points in it to describe the explicit time dependence of a periodic motion without affecting the probability distributions of observables. Heuristic evidence for such an assumption comes from Dirac's theory of interaction between radiation and matter. If the assumption is adopted as a requirement on the mathematical model for a quantum system, then a real Hilbert space is ruled out in favour of a complex Hilbert space as a possible model for such a system.

  13. Two-photon spectral amplitude of entangled states resolved in separable Schmidt modes

    International Nuclear Information System (INIS)

    Avella, A; Brida, G; Gramegna, M; Shurupov, A; Genovese, M; Chekhova, M

    2015-01-01

    The ability to access high dimensionality in Hilbert spaces represents a demanding keystone for state-of-the-art quantum information. The manipulation of entangled states in continuous variables, wavevector as well as frequency, represents a powerful resource in this sense. The number of dimensions of the Hilbert space that can be used in practical information protocols is determined by the number of Schmidt modes that can be addressed one by one. In the case of wavevector variables, the Schmidt modes can be losslessly selected using a single-mode fibre and a spatial light modulator, but no similar procedure exists for frequency space. The aim of this work is to present a technique to engineer the spectral properties of biphoton light, emitted via ultrafast spontaneous parametric down-conversion, in such a way that the two-photon spectral amplitude (TPSA) contains several non-overlapping Schmidt modes, each of which can be filtered losslessly in frequency variables. Such TPSA manipulation is achieved by a fine balancing of parameters such as the pump frequency, the shaping of the pump pulse spectrum, and the dispersion dependence of the spontaneous parametric down-conversion crystals as well as their length. Measurements have been performed exploiting the group-velocity dispersion induced by the passage of optical fields through dispersive media, realizing a frequency-to-time two-dimensional Fourier transform of the TPSA. With this kind of measurement we experimentally demonstrate the ability to control the Schmidt-mode structure of the TPSA through pump spectrum manipulation. (paper)

  14. On the Generation of Random Ensembles of Qubits and Qutrits Computing Separability Probabilities for Fixed Rank States

    Directory of Open Access Journals (Sweden)

    Khvedelidze Arsen

    2018-01-01

    The generation of random mixed states is discussed, aiming at the computation of probabilistic characteristics of composite finite-dimensional quantum systems. In particular, we consider the generation of random Hilbert-Schmidt and Bures ensembles of qubit and qutrit pairs and compute the corresponding probabilities of finding a separable state among the states of a fixed rank.
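    On the Hilbert-Schmidt side, a common recipe for sampling fixed-rank states is the rectangular-Ginibre construction; a minimal sketch (the paper's exact parametrization may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state_hs(d, k, rng):
    # Induced Hilbert-Schmidt-type measure on rank-k d x d density matrices:
    # rho = G G^dagger / tr(G G^dagger), with G a complex d x k Ginibre matrix.
    # k = d recovers the (full-rank) Hilbert-Schmidt ensemble.
    G = rng.standard_normal((d, k)) + 1j * rng.standard_normal((d, k))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rho = random_state_hs(6, 3, rng)  # qubit-qutrit pair (d = 6), rank 3
print(np.linalg.matrix_rank(rho), np.trace(rho).real)
```

A separability-probability estimate for qubit-qutrit pairs then follows by applying the PPT criterion (necessary and sufficient at 2 × 3) to many such samples.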

  15. Clustering in Hilbert simplex geometry

    KAUST Repository

    Nielsen, Frank

    2017-04-03

    Clustering categorical distributions in the probability simplex is a fundamental primitive often met in applications dealing with histograms or mixtures of multinomials. Traditionally, the differential-geometric structure of the probability simplex has been used either by (i) setting the Riemannian metric tensor to the Fisher information matrix of the categorical distributions, or (ii) defining the information-geometric structure induced by a smooth dissimilarity measure, called a divergence. In this paper, we introduce a novel, computationally friendly non-Riemannian framework for modeling the probability simplex: Hilbert simplex geometry. We discuss the pros and cons of these three statistical modelings and compare them experimentally for clustering tasks.
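    For reference, the Hilbert (projective) distance restricted to the open probability simplex has a simple closed form; a sketch, assuming the standard definition used in Hilbert simplex geometry:

```python
import numpy as np

def hilbert_simplex_distance(p, q):
    # Hilbert projective metric on the open probability simplex:
    # d(p, q) = log(max_i p_i/q_i) - log(min_j p_j/q_j)
    r = np.asarray(p, float) / np.asarray(q, float)
    return np.log(r.max()) - np.log(r.min())

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
d_pq = hilbert_simplex_distance(p, q)
print(d_pq)  # ~ 1.8326, and equals d(q, p) by symmetry
```

Unlike Fisher-Rao geodesic distances, this needs no square roots or trigonometry, which is part of its computational appeal for clustering.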

  16. Hilbert-Schmidt and Sobol sensitivity indices for static and time series Wnt signaling measurements in colorectal cancer - part A.

    Science.gov (United States)

    Sinha, Shriprakash

    2017-12-04

    Ever since the accidental discovery of Wingless [Sharma R.P., Drosophila Information Service, 1973, 50, p. 134], research on the Wnt signaling pathway has taken significant strides in wet-lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information-rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, not many computational studies exist which incorporate the simultaneous study of these issues. This manuscript (i) explores the strength of contributing factors in the signaling pathway, (ii) analyzes the existing causal relations among the inter/extracellular factors affecting the pathway, based on prior biological knowledge, and (iii) investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time series expression profiles, using the density-based (Hilbert-Schmidt Information Criterion) and variance-based (Sobol) sensitivity indices. The results show the advantage of using density-based indices over variance-based indices, mainly due to the former's employment of distance measures and the kernel trick via a reproducing kernel Hilbert space (RKHS), which capture nonlinear relations among the various intra/extracellular factors of the pathway in a higher-dimensional space. In time series data, using these indices it is now possible to observe where in time, and which, factors get influenced and contribute to the pathway as changes in the concentrations of the other factors are made. This synergy of prior biological knowledge, sensitivity analysis and representations in higher-dimensional spaces can facilitate time-based administration of targeted therapeutic drugs and reveal hidden biological information within
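    The density-based index mentioned above builds on HSIC; a minimal biased-estimator sketch with Gaussian kernels (the kernel choice and bandwidth are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC with Gaussian kernels: trace(K H L H) / (n-1)^2,
    # where H is the centering matrix. Larger values indicate dependence.
    n = len(x)
    def gram(z):
        d2 = (z[:, None] - z[None, :]) ** 2
        return np.exp(-d2 / (2 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gram(np.asarray(x, float)), gram(np.asarray(y, float))
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(2)
x = rng.standard_normal(200)
h_dep, h_ind = hsic(x, x ** 2), hsic(x, rng.standard_normal(200))
print(h_dep, h_ind)  # nonlinear dependence yields a much larger index
```

Because the kernel embeds the variables in an RKHS, HSIC picks up the purely nonlinear x-versus-x² dependence that a linear correlation coefficient would miss.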

  17. Some means inequalities for positive operators in Hilbert spaces

    Directory of Open Access Journals (Sweden)

    Jin Liang

    2017-01-01

    In this paper, we obtain two refinements of the ordering relations among Heinz means with different parameters via the Taylor series of some hyperbolic functions, and along the way we derive new generalizations of Heinz operator inequalities. Moreover, we establish a matrix version of the Heinz inequality for the Hilbert-Schmidt norm. Finally, we introduce a weighted multivariate geometric mean and show that the weighted multivariate operator geometric mean possesses several attractive properties and means inequalities.

  18. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    Science.gov (United States)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1²/(∑_{j=1}^{n} x_j²/n), where x_j (j = 1, 2, …, n) are the unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n, and analyse their behaviour.
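    The convergence to the Marčenko-Pastur form is easy to probe by simulation; a sketch for the β = 2 (GUE) case:

```python
import numpy as np

rng = np.random.default_rng(3)

def schmidt_like_w(n, rng):
    # GUE (beta = 2) matrix: A = (G + G^dagger)/2 with complex Gaussian G
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    x = np.linalg.eigvalsh((G + G.conj().T) / 2)
    j = rng.integers(n)  # a randomly chosen (unordered) eigenvalue
    return x[j] ** 2 / np.mean(x ** 2)

n, trials = 100, 2000
w = np.array([schmidt_like_w(n, rng) for _ in range(trials)])
# Marchenko-Pastur limit: support effectively [0, 4], with mean 1
print(w.mean(), (w > 4.2).mean())
```

Even at n = 100 the sampled w essentially never exceeds 4, illustrating that the formal upper bound n is irrelevant in the large-n limit.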

  19. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    International Nuclear Information System (INIS)

    Pato, Mauricio P; Oshanin, Gleb

    2013-01-01

    We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1²/(∑_{j=1}^{n} x_j²/n), where x_j (j = 1, 2, …, n) are the unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko–Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n, and analyse their behaviour. (paper)

  20. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: computation of the probability distribution of a function of variables, given the probability distributions of the variables themselves. COVAL has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: numerical transformation of probability distributions.
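    COVAL computes such distributions by numerical transformation; the same physical question can be sketched by Monte Carlo (the strength and load numbers below are hypothetical, not from the code's documentation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Distribution of a function of random variables, here by Monte Carlo.
# Hypothetical reliability example: failure margin M = R - L for a
# structural strength R ~ Normal(10, 1) and a random load L ~ Normal(7, 1.5).
R = rng.normal(10.0, 1.0, 100000)
L = rng.normal(7.0, 1.5, 100000)
M = R - L

p_fail = (M < 0).mean()  # probability that the load exceeds the strength
print(M.mean(), M.std(), p_fail)
```

For this Gaussian case the answer is known in closed form (M ~ Normal(3, √3.25), failure probability ≈ 0.048), which makes it a convenient check on either approach.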

  1. Modeling Scramjet Flows with Variable Turbulent Prandtl and Schmidt Numbers

    Science.gov (United States)

    Xiao, X.; Hassan, H. A.; Baurle, R. A.

    2006-01-01

    A complete turbulence model, in which the turbulent Prandtl and Schmidt numbers are calculated as part of the solution and in which averages involving chemical source terms are modeled, is presented. Avoiding the use of assumed or evolution probability distribution functions (PDFs) results in a highly efficient algorithm for reacting flows. The predictions of the model are compared with two sets of experiments involving supersonic mixing and one involving supersonic combustion. The results demonstrate the need to consider turbulence/chemistry interactions in supersonic combustion. In general, good agreement with experiment is indicated.

  2. Aspects of a representation of quantum theory in terms of classical probability theory by means of integration in Hilbert space

    International Nuclear Information System (INIS)

    Bach, A.

    1981-01-01

    A representation of quantum mechanics in terms of classical probability theory by means of integration in Hilbert space is discussed. This formal hidden-variables representation is analysed in the context of impossibility proofs concerning hidden-variables theories. The structural analogy of this formulation of quantum theory with classical statistical mechanics is used to elucidate the difference between classical mechanics and quantum mechanics. (author)

  3. Weibull Distribution for Estimating the Parameters and Application of Hilbert Transform in case of a Low Wind Speed at Kolaghat

    Directory of Open Access Journals (Sweden)

    P Bhattacharya

    2016-09-01

    The wind resource varies with the time of day and the season of the year, and even to some extent from year to year. Wind energy has inherent variability and hence is described by distribution functions. In this paper, we present methods for estimating the Weibull parameters for a low-wind-speed characterization, namely the shape parameter (k) and the scale parameter (c), and characterize the discrete wind data sample by the discrete Hilbert transform. The Weibull distribution is an important distribution, especially for reliability and maintainability analysis. Suitable values for both the shape and scale parameters of the Weibull distribution are important for selecting locations for installing wind turbine generators; the scale parameter is also important in determining whether a wind farm is good or not. The use of the discrete Hilbert transform (DHT) for wind speed characterization opens a new avenue for the DHT besides its applications in digital signal processing. In this paper, the discrete Hilbert transform has been applied to characterize the wind sample data measured at the College of Engineering and Management, Kolaghat, East Midnapore, India in January 2011.
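    A common moment-based recipe for the Weibull shape and scale parameters (one of several estimation methods; not necessarily the paper's exact estimator) can be sketched and checked on synthetic wind-speed data:

```python
import math
import numpy as np

def weibull_params(v):
    # Empirical (moment-based) estimators often used for wind data:
    # shape k from the coefficient of variation, scale c from the mean.
    v = np.asarray(v, float)
    k = (v.std(ddof=1) / v.mean()) ** -1.086
    c = v.mean() / math.gamma(1 + 1 / k)
    return k, c

# Check on synthetic data with known parameters
rng = np.random.default_rng(5)
k_true, c_true = 1.8, 4.0  # hypothetical low-wind-speed values, m/s scale
v = c_true * rng.weibull(k_true, 50000)
k_est, c_est = weibull_params(v)
print(k_est, c_est)  # close to the true (1.8, 4.0)
```

The power-law shortcut for k is an approximation (bias of a percent or two is typical); maximum-likelihood fitting is the usual refinement when more accuracy is needed.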

  4. Formulas for Rational-Valued Separability Probabilities of Random Induced Generalized Two-Qubit States

    Directory of Open Access Journals (Sweden)

    Paul B. Slater

    2015-01-01

    Previously, a formula, incorporating a ₅F₄ hypergeometric function, for the Hilbert-Schmidt-averaged determinantal moments ⟨|ρ^{PT}|^n |ρ|^k⟩/⟨|ρ|^k⟩ of 4×4 density matrices (ρ) and their partial transposes (ρ^{PT}) was applied with k = 0 to the generalized two-qubit separability probability question. The formula can, furthermore, be viewed, as we note here, as an averaging over "induced measures in the space of mixed quantum states." The associated induced-measure separability probabilities (k = 1, 2, …) are found, via a high-precision density approximation procedure, to assume interesting, relatively simple rational values in the two-re[al]bit (α = 1/2), (standard) two-qubit (α = 1), and two-quater[nionic]bit (α = 2) cases. We deduce rather simple companion (rebit, qubit, quaterbit, …) formulas that successfully reproduce the rational values assumed for general k. These formulas are observed to share certain features, possibly allowing them to be incorporated into a single master formula.

  5. Frames and bases in tensor products of Hilbert spaces and Hilbert C ...

    Indian Academy of Sciences (India)

    In this article, we study tensor products of Hilbert C*-modules and Hilbert spaces. We show that if E is a Hilbert A-module and F is a Hilbert B-module, then tensor products of frames (orthonormal bases) for E and F produce frames (orthonormal bases) for the Hilbert A ⊗ B-module E ⊗ F, and we obtain further results. For Hilbert ...

  6. Hilbert space, Poincare dodecahedron and golden mean transfiniteness

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2007-01-01

    A rather direct connection between Hilbert space and E-infinity theory is established via an irrational-transfinite golden mean topological probability. Subsequently the ramifications for Kleinian modular spaces and the cosmological Poincare Dodecahedron proposals are considered

  7. Limit distribution function of inhomogeneities in regions with random boundary in the Hilbert space

    International Nuclear Information System (INIS)

    Rasulova, M.Yu.; Tashpulatov, S.M.

    2004-10-01

    The interaction of charged-particle systems with a membrane consisting of inhomogeneities which are randomly distributed by the same law in the vicinity of appropriate sites of a planar crystal lattice is studied. A system of equations for the self-consistent potential U₁(x, ξ₀, …, ξ_N, …) and the density of induced charges σ(x, ξ₀, …, ξ_N, …) is solved in Hilbert space. (author)

  8. Frames in super Hilbert modules

    Directory of Open Access Journals (Sweden)

    Mehdi Rashidi-Kouchi

    2018-01-01

    In this paper, we define super Hilbert modules and investigate frames in these spaces. Super Hilbert modules are a generalization of super Hilbert spaces in the Hilbert C*-module setting. We define frames in a super Hilbert module and characterize them by using the concept of g-frames in a Hilbert C*-module. Finally, disjoint frames in Hilbert C*-modules are introduced and investigated.

  9. Schmidt number for quantum operations

    International Nuclear Information System (INIS)

    Huang Siendong

    2006-01-01

    Understanding how entangled states behave under local quantum operations is an open problem in quantum-information theory. The Jamiolkowski isomorphism provides a natural way to study this problem in terms of quantum states. We introduce the Schmidt number for quantum operations by this duality and clarify how the Schmidt number of a quantum state changes under a local quantum operation. Some characterizations of quantum operations with Schmidt number k are also provided.

  10. Hilbert-type inequalities for Hilbert space operators | Krnic ...

    African Journals Online (AJOL)

    In this paper we establish a general form of the Hilbert inequality for positive invertible operators on a Hilbert space. Special emphasis is given to such inequalities with homogeneous kernels. In some general cases the best possible constant factors are also derived. Finally, we obtain the improvement of previously deduced ...

  11. Numerical Gram-Schmidt orthonormalization

    International Nuclear Information System (INIS)

    Werneth, Charles M; Dhar, Mallika; Maung, Khin Maung; Sirola, Christopher; Norbury, John W

    2010-01-01

    A numerical Gram-Schmidt orthonormalization procedure is presented for constructing an orthonormal basis function set from a non-orthonormal set, when the number of basis functions is large. This method will provide a pedagogical illustration of the Gram-Schmidt procedure and can be presented in classes on numerical methods or computational physics.
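    A compact version of such a procedure, using the modified (numerically stabler) variant of Gram-Schmidt, can be sketched as:

```python
import numpy as np

def gram_schmidt(vectors):
    # Modified Gram-Schmidt: orthogonalize against each accepted basis
    # vector in turn, which is more stable than the classical recursion
    # when the number of basis functions is large.
    basis = []
    for v in np.asarray(vectors, float):
        w = v.copy()
        for b in basis:
            w -= (b @ w) * b  # subtract the projection onto each earlier vector
        norm = np.linalg.norm(w)
        if norm > 1e-12:  # skip (numerically) linearly dependent inputs
            basis.append(w / norm)
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
print(np.round(Q @ Q.T, 12))  # identity: the rows are orthonormal
```

For function bases, the dot products above would be replaced by the relevant inner-product quadratures, but the control flow is identical.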

  12. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

    Motivated by the increasing need to analyze complex, uncertain multidimensional data, this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work is motivated by a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions.

  13. A short walk in quantum probability

    Science.gov (United States)

    Hudson, Robin

    2018-04-01

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue `Hilbert's sixth problem'.

  15. A discussion on the origin of quantum probabilities

    International Nuclear Information System (INIS)

    Holik, Federico; Sáenz, Manuel; Plastino, Angel

    2014-01-01

    We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox's method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.

  16. Almost all quantum channels are equidistant

    Science.gov (United States)

    Nechita, Ion; Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol

    2018-05-01

    In this work, we analyze properties of generic quantum channels in the case of large system size. We use random matrix theory and free probability to show that the distance between two independent random channels converges to a constant value as the dimension of the system grows larger. As a measure of the distance we use the diamond norm. In the case of a flat Hilbert-Schmidt distribution on quantum channels, we obtain that the distance converges to 1/2 + 2/π, giving also an estimate for the maximum success probability for distinguishing the channels. We also consider the problem of distinguishing two random unitary rotations.

  17. Hilbert transform and optical tomography for anisotropic edge enhancement of phase objects

    International Nuclear Information System (INIS)

    Montes-Perez, Areli; Meneses-Fabian, Cruz; Rodriguez-Zurita, Gustavo

    2011-01-01

    In phase-object tomography, a slice reconstruction is related to the distribution of refractive index. Typically, this is obtained by applying the filtered back-projection algorithm to the set of projections (sinogram) measured experimentally, which are obtained sequentially by calculating the phase of the wave emerging from the slice of the object at different angles. In this paper, based on an optical implementation of the Hilbert transform in a 4f Fourier system, the Hilbert transforms of the projections leaving the object are obtained numerically. When these projection data are captured for a set of viewing angles, an unconventional sinogram is eventually obtained, which we call a Hilbert-sinogram. The reconstruction obtained by applying the filtered back-projection algorithm is proportional to the Hilbert transform of the distribution of refractive index of the slice, and the reconstructed image shows a typical anisotropic edge enhancement. The theoretical analysis, the numerical implementation of the Hilbert transform, and the mathematical model of the reconstructed edge enhancement are detailed extensively.

  18. Excluding joint probabilities from quantum theory

    Science.gov (United States)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability for a single observable. Instead, various definitions were suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert spaces of dimension larger than two. If measurement contexts are included into the definition, joint probabilities are not excluded anymore, but they are still constrained by imprecise probabilities.

  19. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions, and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate … multidimensional data analysis that is considered in this paper (i.e., the approximate processing of probabilistic OLAP queries over probability distributions).

  20. Toward a generalized probability theory: conditional probabilities

    International Nuclear Information System (INIS)

    Cassinelli, G.

    1979-01-01

    The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

  1. Coherent states on Hilbert modules

    International Nuclear Information System (INIS)

    Ali, S Twareque; Bhattacharyya, T; Roy, S S

    2011-01-01

    We generalize the concept of coherent states, traditionally defined as special families of vectors on Hilbert spaces, to Hilbert modules. We show that Hilbert modules over C*-algebras are the natural settings for a generalization of coherent states defined on Hilbert spaces. We consider those Hilbert C*-modules which have a natural left action from another C*-algebra, say A. The coherent states are well defined in this case and they behave well with respect to the left action by A. Certain classical objects like the Cuntz algebra are related to specific examples of coherent states. Finally we show that coherent states on modules give rise to a completely positive definite kernel between two C*-algebras, in complete analogy to the Hilbert space situation. Related to this, there is a dilation result for positive operator-valued measures, in the sense of Naimark. A number of examples are worked out to illustrate the theory. Some possible physical applications are also mentioned.

  2. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  3. Hilbert's programs and beyond

    CERN Document Server

    2013-01-01

    David Hilbert was one of the great mathematicians who expounded the centrality of their subject in human thought. In this collection of essays, Wilfried Sieg frames Hilbert's foundational work, from 1890 to 1939, in a comprehensive way and integrates it with modern proof theoretic investigations. Ten essays are devoted to the analysis of classical as well as modern proof theory; three papers on the mathematical roots of Hilbert's work precede the analytical core, and three final essays exploit an open philosophical horizon for reflection on the nature of mathematics in the 21st century.

  4. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test...
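    For two variables, the population quantity behind dHSIC reduces to the familiar HSIC. A minimal biased empirical estimate with Gaussian kernels might look as follows (an illustrative sketch for 1-D samples, not the authors' estimator or its $d$-variable generalization):

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    # Gram matrix of the Gaussian (characteristic) kernel on a 1-D sample
    d = x[:, None] - x[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC = trace(K H L H) / n^2 with centering matrix H."""
    n = len(x)
    K = gaussian_gram(np.asarray(x, dtype=float), sigma)
    L = gaussian_gram(np.asarray(y, dtype=float), sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2
```

The estimate is near zero for independent samples and larger for dependent ones, which is what the permutation and bootstrap tests exploit.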

  5. Quantum theory in real Hilbert space: How the complex Hilbert space structure emerges from Poincaré symmetry

    Science.gov (United States)

    Moretti, Valter; Oppio, Marco

    As earlier conjectured by several authors and much later established by Solèr (relying on partial results by Piron, Maeda-Maeda and other authors), from the lattice theory point of view, Quantum Mechanics may be formulated in real, complex or quaternionic Hilbert spaces only. Stückelberg provided some physical, but not mathematically rigorous, reasons for ruling out the real Hilbert space formulation, assuming that any formulation should encompass a statement of Heisenberg principle. Focusing on this issue from another — in our opinion, deeper — viewpoint, we argue that there is a general fundamental reason why elementary quantum systems are not described in real Hilbert spaces. It is their basic symmetry group. In the first part of the paper, we consider an elementary relativistic system within Wigner’s approach defined as a locally-faithful irreducible strongly-continuous unitary representation of the Poincaré group in a real Hilbert space. We prove that, if the squared-mass operator is non-negative, the system admits a natural, Poincaré invariant and unique up to sign, complex structure which commutes with the whole algebra of observables generated by the representation itself. This complex structure leads to a physically equivalent reformulation of the theory in a complex Hilbert space. Within this complex formulation, differently from what happens in the real one, all selfadjoint operators represent observables in accordance with Solèr’s thesis, and the standard quantum version of Noether theorem may be formulated. In the second part of this work, we focus on the physical hypotheses adopted to define a quantum elementary relativistic system relaxing them on the one hand, and making our model physically more general on the other hand. We use a physically more accurate notion of irreducibility regarding the algebra of observables only, we describe the symmetries in terms of automorphisms of the restricted lattice of elementary propositions of the

  6. Frames and bases in tensor products of Hilbert spaces and Hilbert C ...

    Indian Academy of Sciences (India)

    [14] Heil C E and Walnut D F, Continuous and discrete wavelet transforms, SIAM Review 31. (1989) 628–666. [15] Khosravi A and Asgari M S, Frames and bases in tensor product of Hilbert spaces, Int. J. Math. 4(6) (2003) 527–538. [16] Lance E C, Hilbert C. ∗. -modules – a toolkit for operator algebraists, London Math. Soc.

  7. Weaving Hilbert space fusion frames

    OpenAIRE

    Neyshaburi, Fahimeh Arabyani; Arefijamaal, Ali Akbar

    2018-01-01

    A new notion in frame theory, so-called weaving frames, has been recently introduced to deal with some problems in signal processing and wireless sensor networks. Also, fusion frames are an important extension of frames, used in many areas, especially wireless sensor networks. In this paper, we survey the notion of weaving Hilbert space fusion frames. This concept has potential applications in wireless sensor networks which require distributed processing using different fusion frames...

  8. Mathematical methods in physics distributions, Hilbert space operators, variational methods, and applications in quantum physics

    CERN Document Server

    Blanchard, Philippe

    2015-01-01

    The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas.  The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories.  All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods.   The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...

  9. On the ratio probability of the smallest eigenvalues in the Laguerre unitary ensemble

    Science.gov (United States)

    Atkin, Max R.; Charlier, Christophe; Zohren, Stefan

    2018-04-01

    We study the probability distribution of the ratio between the second smallest and smallest eigenvalue in the Laguerre unitary ensemble. The probability that this ratio is greater than r > 1 is expressed in terms of a Hankel determinant with a perturbed Laguerre weight. The limiting probability distribution for the ratio as n → ∞ is found as an integral containing two functions q_1(x) and q_2(x). These functions satisfy a system of two coupled Painlevé V equations, which are derived from a Lax pair of a Riemann-Hilbert problem. We compute the asymptotic behaviours of these functions, as well as large n asymptotics for the associated Hankel determinants in several regimes of r and x.

  10. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
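    A toy 1-D sketch of the underlying idea (a Gaussian-process surrogate whose acquisition function proposes the next sampling point) can be written as follows; the squared-exponential kernel, its length-scale, the upper-confidence-bound acquisition, and the grid search are all illustrative assumptions, not the authors' method:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    # exact GP regression posterior mean and variance on candidate grid Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 0.0)

def bayes_opt(f, bounds, n_iter=30, kappa=2.0, seed=0):
    # upper-confidence-bound Bayesian optimization on a fixed 1-D grid
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=3)
    y = np.array([f(x) for x in X])
    grid = np.linspace(bounds[0], bounds[1], 200)
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mu + kappa * np.sqrt(var))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)]
```

With few evaluations, the acquisition maximizer concentrates samples near the mode of the target, which is the advantage over random search that the abstract reports.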

  11. Computing Instantaneous Frequency by normalizing Hilbert Transform

    Science.gov (United States)

    Huang, Norden E.

    2005-05-31

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. This method is designed specifically to circumvent the limitation set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert transform do not agree. The motivation for this method is that straightforward application of the Hilbert transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake made up to this date. In order to make the Hilbert transform method work, the data have to obey certain restrictions.
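    For contrast, the straightforward phase-derivative computation that the record warns about can be sketched as follows, using an FFT-based analytic signal (a standard construction, not the patented NAHT/NHT normalization). It behaves well only for signals meeting the restrictions mentioned above, e.g. a narrow-band tone:

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal: zero negative frequencies, double positive ones
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    # IF as the derivative of the unwrapped phase of the analytic signal
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

For a pure 5 Hz sine the recovered instantaneous frequency is flat at 5 Hz; for amplitude-modulated or wide-band data this naive estimate is exactly where the normalization schemes of the invention are needed.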

  12. Winning Attitude & Dedication to Physical Therapy Keep Sam Schmidt on Track

    Science.gov (United States)

    Bosley, Nikki Prevenslik

    2006-01-01

    This article relates how Sam Schmidt returned to living a productive life after an accident left him with spinal cord injury. Schmidt was a former Indy Racing League driver who founded Sam Schmidt Motorsports after his accident in 2000. Schmidt's car hit the wall as he exited turn two during a practice session at Walt Disney World Speedway in…

  13. Some open problems in noncommutative probability

    International Nuclear Information System (INIS)

    Kruszynski, P.

    1981-01-01

    A generalization of probability measures to non-Boolean structures is discussed. The starting point of the theory is the Gleason theorem about the form of measures on closed subspaces of a Hilbert space. The problems are formulated in terms of probability on lattices of projections in arbitrary von Neumann algebras. (Auth.)

  14. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
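    The Gram-Schmidt/QR step at the heart of such Lyapunov computations can be illustrated, for a toy constant-Jacobian system, by the standard Benettin-style iteration (a minimal serial sketch, not the Scalapack/MAGMA implementations described in the record):

```python
import numpy as np

def lyapunov_spectrum(A, n_steps=50):
    """Benettin-style QR iteration for a constant Jacobian A.

    Tangent vectors are propagated by A and re-orthonormalized by QR
    (the QR factorization plays the role of the Gram-Schmidt step);
    the averaged logs of |diag(R)| estimate the Lyapunov exponents.
    """
    dim = A.shape[0]
    Q = np.eye(dim)
    sums = np.zeros(dim)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(A @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / n_steps
```

For a diagonal map diag(2, 1/2) the iteration recovers the exponents ln 2 and -ln 2, one expanding and one contracting direction.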

  15. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  16. Bernhard Schmidt - realiteet müütide vastu / Ülo Tonts

    Index Scriptorium Estoniae

    Tonts, Ülo, 1931-2016

    1996-01-01

    On the book Optical Illusions: The Life Story of Bernhard Schmidt, the Great Stellar Optician of the Twentieth Century by Erik Schmidt (Estonian Academy Publishers, 1995). B. Schmidt was an optician of Estonian origin, about whom J. Kross wrote in the novel "Vastutuulelaev" (Sailing Against the Wind).

  17. Exponential Hilbert series of equivariant embeddings

    OpenAIRE

    Johnson, Wayne A.

    2018-01-01

    In this article, we study properties of the exponential Hilbert series of a $G$-equivariant projective variety, where $G$ is a semisimple, simply-connected complex linear algebraic group. We prove a relationship between the exponential Hilbert series and the degree and dimension of the variety. We then prove a combinatorial identity for the coefficients of the polynomial representing the exponential Hilbert series. This formula is used in examples to prove further combinatorial identities inv...

  18. Surface colour photometry of galaxies with Schmidt telescopes.

    Science.gov (United States)

    Wray, J. D.

    1972-01-01

    A method is described which owes its practicality to the capability of Schmidt telescopes to record a number of galaxy images on a single plate and to the existence of high speed computer controlled area-scanning precision microdensitometers such as the Photometric Data Systems model 1010. The method of analysis results in quantitative color-index information which is displayed in a manner that allows any user to effectively study the morphological properties of the distribution of color-index in galaxies.

  19. The Stokes-Einstein relation at moderate Schmidt number.

    Science.gov (United States)

    Balboa Usabiaga, Florencio; Xie, Xiaoyi; Delgado-Buscalioni, Rafael; Donev, Aleksandar

    2013-12-07

    The Stokes-Einstein relation for the self-diffusion coefficient of a spherical particle suspended in an incompressible fluid is an asymptotic result in the limit of large Schmidt number, that is, when momentum diffuses much faster than the particle. When the Schmidt number is moderate, which happens in most particle methods for hydrodynamics, deviations from the Stokes-Einstein prediction are expected. We study these corrections computationally using a recently developed minimally resolved method for coupling particles to an incompressible fluctuating fluid in both two and three dimensions. We find that for moderate Schmidt numbers the diffusion coefficient is reduced relative to the Stokes-Einstein prediction by an amount inversely proportional to the Schmidt number in both two and three dimensions. We find, however, that the Einstein formula is obeyed at all Schmidt numbers, consistent with linear response theory. The mismatch arises because thermal fluctuations affect the drag coefficient for a particle due to the nonlinear nature of the fluid-particle coupling. The numerical data are in good agreement with an approximate self-consistent theory, which can be used to estimate finite-Schmidt number corrections in a variety of methods. Our results indicate that the corrections to the Stokes-Einstein formula come primarily from the fact that the particle itself diffuses together with the momentum. Our study separates effects coming from corrections to no-slip hydrodynamics from those of finite separation of time scales, allowing for a better understanding of widely observed deviations from the Stokes-Einstein prediction in particle methods such as molecular dynamics.
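    The baseline relation, and the qualitative 1/Sc reduction reported above, can be written down directly. The correction prefactor `c` below is a placeholder for illustration, not the paper's self-consistent theory:

```python
import math

def stokes_einstein(kT, eta, radius):
    # D_SE = kT / (6*pi*eta*R): no-slip sphere, infinite-Schmidt-number limit
    return kT / (6.0 * math.pi * eta * radius)

def finite_schmidt_correction(D_se, Sc, c=1.0):
    # Illustrative finite-Sc reduction D ~ D_SE * (1 - c/Sc); the prefactor
    # c is an assumed placeholder, not a value fitted in the paper.
    return D_se * (1.0 - c / Sc)
```

For a 1 nm sphere in water at room temperature this gives D on the order of 2-3 × 10^-10 m²/s, with the finite-Schmidt value slightly below the Stokes-Einstein prediction.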

  20. Teleportation schemes in infinite dimensional Hilbert spaces

    International Nuclear Information System (INIS)

    Fichtner, Karl-Heinz; Freudenberg, Wolfgang; Ohya, Masanori

    2005-01-01

    The success of quantum mechanics is due to the discovery that nature is described in infinite dimension Hilbert spaces, so that it is desirable to demonstrate the quantum teleportation process in a certain infinite dimensional Hilbert space. We describe the teleportation process in an infinite dimensional Hilbert space by giving simple examples

  1. Collaborative Oceanographic Research Opportunities with Schmidt Ocean Institute

    Science.gov (United States)

    Zykov, V.

    2014-12-01

    Schmidt Ocean Institute (http://www.schmidtocean.org/) was founded by Dr. Eric Schmidt and Wendy Schmidt in 2009 to support frontier oceanographic research and exploration to expand the understanding of the world's oceans through technological advancement, intelligent, data-rich observation and analysis, and open sharing of information. Schmidt Ocean Institute operates a state-of-the-art globally capable research vessel Falkor (http://www.schmidtocean.org/story/show/47). After two years of scientific operations in the Atlantic Ocean, Gulf of Mexico, Caribbean, Eastern and Central Pacific, R/V Falkor is now preparing to support research in the Western Pacific and Eastern Indian Oceans in 2015 and 2016. As part of the long term research program development for Schmidt Ocean Institute, we aim to identify initiatives and projects that demonstrate strong alignment with our strategic interests. We focus on scientific opportunities that highlight effective use of innovative technologies to better understand the oceans, such as, for example, research enabled with remotely operated and autonomous vehicles, acoustics, in-situ sensing, telepresence, etc. Our technology-first approach to ocean science gave rise to infrastructure development initiatives, such as the development of a new full ocean depth Hybrid Remotely Operated Vehicle, new 6000m scientific Autonomous Underwater Vehicle, live HD video streaming from the ship to YouTube, shipboard high performance supercomputing, etc. We also support projects focusing on oceanographic technology research and development onboard R/V Falkor. We provide our collaborators with access to all of R/V Falkor's facilities and instrumentation in exchange for a commitment to make the resulting scientific data openly available to the international oceanographic community. This presentation aims to expand awareness about the interests and capabilities of Schmidt Ocean Institute and R/V Falkor among our scientific audiences and further

  2. Time-frequency analysis of non-stationary fusion plasma signals using an improved Hilbert-Huang transform

    International Nuclear Information System (INIS)

    Liu, Yangqing; Tan, Yi; Xie, Huiqiao; Wang, Wenhao; Gao, Zhe

    2014-01-01

    An improved Hilbert-Huang transform method is developed for the time-frequency analysis of non-stationary signals in tokamak plasmas. Maximal overlap discrete wavelet packet transform rather than wavelet packet transform is proposed as a preprocessor to decompose a signal into various narrow-band components. Then, a correlation-coefficient-based selection method is utilized to eliminate the irrelevant intrinsic mode functions obtained from empirical mode decomposition of those narrow-band components. Subsequently, a time-varying vector autoregressive moving average model instead of Hilbert spectral analysis is performed to compute the Hilbert spectrum, i.e., a three-dimensional time-frequency distribution of the signal. The feasibility and effectiveness of the improved Hilbert-Huang transform method are demonstrated by analyzing a non-stationary simulated signal and actual experimental signals in fusion plasmas.

  3. A primer on Hilbert space theory linear spaces, topological spaces, metric spaces, normed spaces, and topological groups

    CERN Document Server

    Alabiso, Carlo

    2015-01-01

    This book is an introduction to the theory of Hilbert space, a fundamental tool for non-relativistic quantum mechanics. Linear, topological, metric, and normed spaces are all addressed in detail, in a rigorous but reader-friendly fashion. The rationale for an introduction to the theory of Hilbert space, rather than a detailed study of Hilbert space theory itself, resides in the very high mathematical difficulty of even the simplest physical case. Within an ordinary graduate course in physics there is insufficient time to cover the theory of Hilbert spaces and operators, as well as distribution theory, with sufficient mathematical rigor. Compromises must be found between full rigor and practical use of the instruments. The book is based on the author's lessons on functional analysis for graduate students in physics. It will equip the reader to approach Hilbert space and, subsequently, rigged Hilbert space, with a more practical attitude. With respect to the original lectures, the mathematical flavor in all sub...

  4. A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    International Nuclear Information System (INIS)

    Niestegge, Gerd

    2010-01-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lueders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases. (general)
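    The identification of probability conditionalization with the Lueders-von Neumann state transition mentioned above can be illustrated in the ordinary Hilbert space formalism (a minimal sketch for a density matrix rho and a projective event P):

```python
import numpy as np

def lueders_conditional(rho, P):
    """Probability of event P in state rho, and the conditional state.

    prob = tr(P rho P); rho' = P rho P / prob  (Lueders-von Neumann rule).
    """
    post = P @ rho @ P
    prob = float(np.trace(post).real)
    return post / prob, prob
```

Conditioning the state |+><+| on the event "outcome 0" yields probability 1/2 and the post-measurement state |0><0|.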

  5. Effect of field-dependent mobility on the escape probability. I. Electrons photoinjected in neopentane

    International Nuclear Information System (INIS)

    Mozumder, A.; Carmichael, I.

    1978-01-01

    A general procedure is described for calculating the escape probability of an electron against neutralization in the presence of an external field after it has been ejected into a dielectric liquid from a planar surface. The present paper utilizes the field-dependent electron mobility measurement in neopentane by Bakale and Schmidt. The calculated escape probability, upon averaging over the initial distribution, is compared with the current efficiency measurement of Holroyd et al. The median thermalization length, inferred from this comparison, depends in general upon the assumed form of the initial distribution. It is less than the value obtained when the field dependence of the mobility is ignored but greater than that applicable to the high-energy irradiation case. A plausible explanation is offered.

  6. Power Spectral Density and Hilbert Transform

    Science.gov (United States)

    2016-12-01

    ...there is 1.3 W of power. How much bandwidth does a pure sine wave require? The bandwidth of an ideal sine wave is 0 Hz. ... The Hilbert transform is a math function used to convert a real function into an analytic signal. ...
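    The Hilbert transform mentioned in this snippet turns a real signal into an analytic signal. A minimal discrete sketch (assuming an even-length input) uses the FFT, in the same spirit as scipy.signal.hilbert:

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal of a real sequence x (even length assumed):
    zero the negative-frequency bins and double the positive ones, keeping
    the DC and Nyquist bins unchanged. The imaginary part of the result is
    the discrete Hilbert transform of x."""
    n = len(x)
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)
    weights[0] = 1.0            # DC bin
    weights[n // 2] = 1.0       # Nyquist bin (even n)
    weights[1:n // 2] = 2.0     # positive frequencies
    return np.fft.ifft(spectrum * weights)

# cos(2*pi*5*t) maps to exp(i*2*pi*5*t): unit envelope, sine as imaginary part
t = np.arange(64) / 64
z = analytic_signal(np.cos(2 * np.pi * 5 * t))
```

    The envelope |z| and the instantaneous phase of z are what Hilbert-Huang-style analyses (several records below) build on.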

  7. Compact Hilbert Curve Index Algorithm Based on Gray Code

    Directory of Open Access Journals (Sweden)

    CAO Xuefeng

    2016-12-01

    Full Text Available The Hilbert curve has the best clustering among the various kinds of space-filling curves and has become an important tool in the design of spatial indexes for discrete global grids. However, the standard Hilbert curve index contains substantial redundancy when the data set has large differences between dimensions. In this paper, the construction of the Hilbert curve is analyzed on the basis of Gray codes, and a compact Hilbert curve index algorithm is put forward that avoids the redundancy problem while preserving the clustering of the Hilbert curve. Experimental results show that the compact Hilbert curve index outperforms the standard Hilbert index: their computational complexity is nearly equivalent, but tests on real data sets show that coding time and storage space decrease by 40%, and sorting speed improves by a factor of nearly 4.3.
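    The compact index construction of the paper is more involved, but the standard 2-D Hilbert index it builds on can be sketched with the usual rotate-and-reflect iteration (a textbook algorithm, not the authors' code):

```python
def hilbert_xy_to_d(order, x, y):
    """Map 2-D cell coordinates (x, y) to a distance d along the Hilbert
    curve of the given order (the grid is 2**order cells per side).
    Standard iterative algorithm: classify the quadrant at each scale,
    then rotate/reflect so the recursion always sees the canonical
    orientation."""
    n = 2 ** order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)   # quadrant's offset along the curve
        if ry == 0:                    # rotate/reflect into canonical frame
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

    The map is a bijection between grid cells and curve positions, which is the property the spatial index exploits.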

  8. Hidden measurements, hidden variables and the volume representation of transition probabilities

    OpenAIRE

    Oliynyk, Todd A.

    2005-01-01

    We construct, for any finite dimension $n$, a new hidden measurement model for quantum mechanics based on representing quantum transition probabilities by the volume of regions in projective Hilbert space. For $n=2$ our model is equivalent to the Aerts sphere model and serves as a generalization of it for dimensions $n \\geq 3$. We also show how to construct a hidden variables scheme based on hidden measurements and we discuss how joint distributions arise in our hidden variables scheme and th...

  9. Two-colorable graph states with maximal Schmidt measure

    International Nuclear Information System (INIS)

    Severini, Simone

    2006-01-01

    The Schmidt measure was introduced by Eisert and Briegel for quantifying the degree of entanglement of multipartite quantum systems [J. Eisert, H.-J. Briegel, Phys. Rev. A 64 (2001) 22306]. For two-colorable graph states, the Schmidt measure is related to the spectrum of the associated graph. We observe that almost all two-colorable graph states have maximal Schmidt measure and we construct specific examples. By making appeal to a result of Ehrenfeucht et al. [A. Ehrenfeucht, T. Harju, G. Rozenberg, Discrete Math. 278 (2004) 45], we point out that the graph operations called local complementation and switching form a transitive group acting on the set of all graph states of a given dimension

  10. Schmidt decomposition for non-collinear biphoton angular wave functions

    International Nuclear Information System (INIS)

    Fedorov, M V

    2015-01-01

    Schmidt modes of non-collinear biphoton angular wave functions are found analytically. The experimentally realizable procedure for their separation is described. Parameters of the Schmidt decomposition are used to evaluate the degree of the biphoton's angular entanglement. (paper)
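    For a bipartite pure state the Schmidt coefficients and modes come straight out of a singular value decomposition of the reshaped state vector; a minimal numerical sketch (not the analytical biphoton modes of the paper):

```python
import numpy as np

def schmidt(psi, dim_a, dim_b):
    """Schmidt decomposition of a pure state on H_A (x) H_B: reshape the
    state vector into a dim_a x dim_b matrix and take its SVD. Column k of
    u and row k of vh are the k-th Schmidt modes; s holds the Schmidt
    coefficients, whose squares sum to 1 for a normalized state."""
    matrix = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    u, s, vh = np.linalg.svd(matrix)
    return s, u, vh

# Bell state (|00> + |11>)/sqrt(2): two equal Schmidt coefficients 1/sqrt(2)
coeffs, _, _ = schmidt(np.array([1, 0, 0, 1]) / np.sqrt(2), 2, 2)
```

    The number of nonzero coefficients (the Schmidt rank) and their spread quantify the degree of entanglement; a product state has a single coefficient equal to 1.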

  11. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    Science.gov (United States)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

    Frequency hopping (FH) signals are widely adopted in military communications as a kind of low-probability-of-interception signal, so FH signal detection algorithms are an important research topic. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time, owing to the influence of the window function. To solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise from the received signals by wavelet decomposition and detects the FH signals with the Hilbert-Huang transform. Simulation results show that the proposed algorithm accounts for both time resolution and frequency resolution, and correspondingly the accuracy of FH signal detection can be improved.

  12. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fr...

  13. Commentaries on Hilbert's Basis Theorem | Apine | Science World ...

    African Journals Online (AJOL)

    The famous basis theorem of David Hilbert is an important theorem in commutative algebra. In particular, Hilbert's basis theorem is the most important source of Noetherian rings, which are by far the most important class of rings in commutative algebra. In this paper we have used Hilbert's theorem to examine their unique ...

  14. The role of the rigged Hilbert space in quantum mechanics

    International Nuclear Information System (INIS)

    Madrid, Rafael de la

    2005-01-01

    There is compelling evidence that, when a continuous spectrum is present, the natural mathematical setting for quantum mechanics is the rigged Hilbert space rather than just the Hilbert space. In particular, Dirac's bra-ket formalism is fully implemented by the rigged Hilbert space rather than just by the Hilbert space. In this paper, we provide a pedestrian introduction to the role the rigged Hilbert space plays in quantum mechanics, by way of a simple, exactly solvable example. The procedure will be constructive and based on a recent publication. We also provide a thorough discussion of the physical significance of the rigged Hilbert space

  15. Schmidt's syndrome: a rare cause of puberty menorrhagia.

    Science.gov (United States)

    Sharma, J B; Tiwari, S; Gulati, N; Sharma, S

    1990-12-01

    Schmidt's syndrome, also known as polyglandular deficiency syndrome, is the presence of Addison's disease and hypothyroidism in a single patient. It is usually associated with other autoimmune disorders such as vitiligo, diabetes mellitus, and myasthenia gravis. A rare case of an 18-year-old girl having Schmidt's syndrome and vitiligo who presented with puberty menorrhagia is reported. A brief review of the literature is also given.

  16. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
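    For the Onemax case the exact distribution can be written down directly: ones flip down with a Binomial(f, p) count and zeros flip up with a Binomial(n - f, p) count, which already exhibits the polynomial-in-p structure the paper proves in general. A sketch of this direct computation (not the Krawtchouk-polynomial machinery of the paper):

```python
from math import comb

def onemax_mutation_distribution(n, f, p):
    """Exact distribution of the Onemax fitness (number of ones) after
    uniform bit-flip mutation with per-bit flip probability p, starting
    from a length-n string with fitness f. dist[k] is the probability
    that the mutated string has fitness k."""
    dist = [0.0] * (n + 1)
    for i in range(f + 1):               # i ones flipped to zeros
        pi = comb(f, i) * p ** i * (1 - p) ** (f - i)
        for j in range(n - f + 1):       # j zeros flipped to ones
            pj = comb(n - f, j) * p ** j * (1 - p) ** (n - f - j)
            dist[f - i + j] += pi * pj
    return dist
```

    Each entry of the returned list is a polynomial in p, consistent with the paper's general result; the expected new fitness is f(1 - p) + (n - f)p.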

  17. Nested Hilbert schemes on surfaces: Virtual fundamental class

    DEFF Research Database (Denmark)

    Gholampour, Amin; Sheshmani, Artan; Yau, Shing-Tung

    We construct natural virtual fundamental classes for nested Hilbert schemes on a nonsingular projective surface S. This allows us to define new invariants of S that recover some of the known important cases, such as the Poincare invariants of Durr-Kabanov-Okonek and the stable pair invariants of Kool-Thomas. In the case of the nested Hilbert scheme of points, we can express these invariants in terms of integrals over products of the Hilbert scheme of points on S, and relate them to the vertex operator formulas found by Carlsson-Okounkov. The virtual fundamental classes of the nested Hilbert schemes play a crucial...

  18. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  19. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E.

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)
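    As an illustration of the first recommendation (a log-normal prior when the scale of the true result is known), the Bayesian update for a single measurement can be sketched on a grid. The function and parameter names are hypothetical and this is not the Los Alamos code:

```python
import numpy as np

def lognormal_posterior(measured, meas_sd, prior_median, prior_gsd, n=4000):
    """Grid posterior for a nonnegative true value given one measurement:
    log-normal prior (median prior_median, geometric SD prior_gsd) times a
    Gaussian measurement likelihood N(true, meas_sd), normalized on the
    grid. Returns the grid and the discrete posterior weights."""
    lo = 1e-9
    hi = max(prior_median * prior_gsd ** 5, measured + 6 * meas_sd)
    x = np.linspace(lo, hi, n)
    log_sd = np.log(prior_gsd)
    prior = np.exp(-0.5 * ((np.log(x) - np.log(prior_median)) / log_sd) ** 2) / x
    likelihood = np.exp(-0.5 * ((measured - x) / meas_sd) ** 2)
    posterior = prior * likelihood
    return x, posterior / posterior.sum()

# A measurement well above the prior median: the posterior mean is pulled
# back toward the prior (shrinkage).
x, post = lognormal_posterior(measured=10.0, meas_sd=2.0,
                              prior_median=3.0, prior_gsd=3.0)
post_mean = (x * post).sum()
```

    The shrinkage toward the prior is exactly why the choice of prior matters for low-level bioassay results, which is the subject of the paper.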

  20. Some applications of the fractional Poisson probability distribution

    International Nuclear Information System (INIS)

    Laskin, Nick

    2009-01-01

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.

  1. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.

  2. On the representation of contextual probabilistic dynamics in the complex Hilbert space: Linear and nonlinear evolutions, Schrodinger dynamics

    International Nuclear Information System (INIS)

    Khrennikov, A.

    2005-01-01

    We construct the representation of contextual probabilistic dynamics in the complex Hilbert space. The dynamics of the wave function can thus be considered a Hilbert-space projection of realistic dynamics in a pre-space. The basic condition for representing the pre-space dynamics is the law of statistical conservation of energy (conservation of probabilities). The construction of the dynamical representation is an important step in the development of the contextual statistical viewpoint on quantum processes. But the contextual statistical model is essentially more general than the quantum one. Therefore, in general, the Hilbert-space projection of the pre-space dynamics can be nonlinear and even irreversible (but it is always unitary). Conditions for the linearity and reversibility of the Hilbert-space dynamical projection were found. We also found conditions for the conventional Schrodinger dynamics (including time-dependent Hamiltonians). We remark that in general even the Schrodinger dynamics is based just on the statistical conservation of energy; for individual systems the law of conservation of energy can be violated (at least in our theoretical model)

  3. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    Science.gov (United States)

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  4. Three-photon polarization ququarts: polarization, entanglement and Schmidt decompositions

    International Nuclear Information System (INIS)

    Fedorov, M V; Miklin, N I

    2015-01-01

    We consider polarization states of three photons, propagating collinearly and having equal given frequencies but with arbitrary distributed horizontal or vertical polarizations of photons. A general form of such states is a superposition of four basic three-photon polarization modes, to be referred to as the three-photon polarization ququarts (TPPQ). All such states can be considered as consisting of one- and two-photon parts, which can be entangled with each other. The degrees of entanglement and polarization, as well as the Schmidt decomposition and Stokes vectors of TPPQ are found and discussed. (paper)

  5. "Meester" GFWM Schmidt (1818-1885): skepper van muurtekste en ...

    African Journals Online (AJOL)

    "Meester" G.F.W.M. Schmidt (1818-1885): vernacular artist of mural texts and family trees G.F.W.M. Schmidt was born in The Hague, Netherlands in 1818. After serving in the army for 21 years, he was honourably discharged in 1857. In the 1870's he transferred under unknown circumstances to the district of Fraserburg ...

  6. Bernhard Schmidt - reality versus myths / Ülo Tonts

    Index Scriptorium Estoniae

    Tonts, Ülo, 1931-2016

    1996-01-01

    Review of: Optical Illusions. The Life Story of Bernhard Schmidt, the Great Stellar Optician of the Twentieth Century, by Erik Schmidt. Estonian Academy Publishers, 1995. The same subject is also treated in Jaan Kross's novel 'Vastutuulelaev' (Sailing Against the Wind)

  7. Optimal Entanglement Witnesses for Qubits and Qutrits

    International Nuclear Information System (INIS)

    Bertlmann, R.A.; Durstberger, K.; Hiesmayr, B.C.; Krammer, P.

    2005-01-01

    Full text: We give a review of the connection between an optimal entanglement witness and the Hilbert-Schmidt measure of entanglement (that is, the minimal distance of an entangled state to the set of separable states): a generalized Bell inequality (GBI) is derived within the concept of entanglement witnesses, in the sense that a violation of the inequality detects entanglement and not non-locality, as usual Bell inequalities do. It can be seen that the maximal violation equals the Hilbert-Schmidt measure. Furthermore, since finding the nearest separable state to a given entangled state is rather difficult, a method for checking an estimated nearest separable state is presented. This is illustrated with isotropic qubit and qutrit states; the Hilbert-Schmidt measure, the optimal entanglement witness and the maximal violation of the GBI are calculated for those cases. Possible generalizations to arbitrary dimensions are discussed. (author)
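    For the isotropic two-qubit case the Hilbert-Schmidt measure can be checked numerically: isotropic states are entangled for alpha > 1/3, and their nearest separable state is known to be the boundary isotropic state at alpha = 1/3, giving a measure of (alpha - 1/3)*sqrt(3)/2. A minimal sketch under these assumptions:

```python
import numpy as np

def isotropic(alpha):
    """Two-qubit isotropic state: alpha*|Phi+><Phi+| + (1-alpha)*I/4.
    Entangled for alpha > 1/3."""
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    return alpha * np.outer(phi, phi) + (1 - alpha) * np.eye(4) / 4

def hs_distance(rho, sigma):
    """Hilbert-Schmidt distance ||rho - sigma|| = sqrt(tr[(rho-sigma)^2])."""
    diff = rho - sigma
    return np.sqrt(np.trace(diff.conj().T @ diff).real)

# Hilbert-Schmidt measure of the maximally entangled state (alpha = 1),
# taking the alpha = 1/3 isotropic state as the nearest separable state.
measure = hs_distance(isotropic(1.0), isotropic(1 / 3))
```

    The linear dependence of the measure on alpha - 1/3 reflects that rho_alpha - rho_{1/3} is proportional to |Phi+><Phi+| - I/4, whose Hilbert-Schmidt norm is sqrt(3)/2.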

  8. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

    Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to the "Sub-committee on st...

  9. Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces

    International Nuclear Information System (INIS)

    Vourdas, A.

    2014-01-01

    The orthocomplemented modular lattice of subspaces L[H(d)] of a quantum system with d-dimensional Hilbert space H(d) is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H1, H2), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H1), P(H2) onto the subspaces H1, H2. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin-1/2 particles is valid for Kolmogorov probabilities, but it is not valid for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities

  10. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s_a can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s_a. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP is shown, as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs
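    A Poisson TCP model with an inter-patient spread in the radiosensitivity parameter a can be sketched numerically. The linear-only cell survival exp(-a*D) and all parameter values below are illustrative assumptions, not the paper's fitted numbers:

```python
import numpy as np

def tcp_population(doses, volumes, a_mean, s_a, clonogen_density,
                   n_patients=20000, seed=0):
    """Poisson TCP averaged over a population: for each sampled
    radiosensitivity a ~ N(a_mean, s_a) (clipped at 0), the expected
    number of surviving clonogens is sum_i rho * v_i * exp(-a * D_i)
    over the dose bins (D_i, v_i), and TCP = <exp(-N_surviving)>."""
    doses = np.asarray(doses, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    rng = np.random.default_rng(seed)
    a = np.clip(rng.normal(a_mean, s_a, n_patients), 0.0, None)
    n_surv = clonogen_density * (volumes * np.exp(-np.outer(a, doses))).sum(axis=1)
    return np.exp(-n_surv).mean()

# Uniform 60 Gy to 1 cm^3 at rho = 1e7 clonogens/cm^3, a = 0.3 /Gy:
tcp_60 = tcp_population([60.0], [1.0], a_mean=0.3, s_a=0.0,
                        clonogen_density=1e7)
```

    Increasing s_a above zero flattens the population dose-response curve toward clinically observed slopes, which is the central point of the paper.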

  11. Converting dose distributions into tumour control probability

    Energy Technology Data Exchange (ETDEWEB)

    Nahum, A E [The Royal Marsden Hospital, London (United Kingdom). Joint Dept. of Physics

    1996-08-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s_a can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s_a. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP is shown, as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs.

  12. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into the appropriate subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined
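    The sampling scheme described here fits in a few lines; the combining function below is a plain Python callable, standing in for the user-supplied FORTRAN statements of the real code:

```python
import random
import statistics

def combine_distributions(samplers, func, n=50000, seed=1):
    """Monte Carlo combination of probability distributions: draw one value
    from each input sampler, combine them with `func`, and summarize the
    resulting distribution with its mean, standard deviation, and an
    empirical 90% interval."""
    rng = random.Random(seed)
    samples = sorted(func(*(draw(rng) for draw in samplers)) for _ in range(n))
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    interval = (samples[int(0.05 * n)], samples[int(0.95 * n) - 1])
    return mean, sd, interval

# Example: the sum of two independent Uniform(0, 1) variables (mean 1.0,
# standard deviation sqrt(1/6)).
mean, sd, interval = combine_distributions(
    [lambda r: r.random(), lambda r: r.random()], lambda a, b: a + b)
```

    The appeal of the approach is exactly what the abstract states: the combined distribution never needs an analytical form, only the combining function.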

  13. A constructive presentation of rigged Hilbert spaces

    International Nuclear Information System (INIS)

    Celeghini, Enrico

    2015-01-01

    We construct a rigged Hilbert space for the square-integrable functions on the line, L2(R), adding to the generators of the Weyl-Heisenberg algebra a new discrete operator, related to the degree of the Hermite polynomials. All together, continuous and discrete operators constitute the generators of the projective algebra io(2). L2(R) and the vector space of the line R are shown to be isomorphic representations of such an algebra and, as both these representations are irreducible, all operators defined on the rigged Hilbert spaces L2(R) or R are shown to belong to the universal enveloping algebra of io(2). The procedure can be extended to orthogonal and pseudo-orthogonal spaces of arbitrary dimension by tensorialization. Circumventing all formal problems, the paper proposes a kind of toy model, well defined from a mathematical point of view, of rigged Hilbert spaces where, in contrast with the Hilbert spaces, operators with different cardinality are allowed. (paper)

  14. Recipes for stable linear embeddings from Hilbert spaces to R^m

    OpenAIRE

    Puy, Gilles; Davies, Michael; Gribonval, Remi

    2017-01-01

    We consider the problem of constructing a linear map from a Hilbert space H (possibly infinite dimensional) to R^m that satisfies a restricted isometry property (RIP) on an arbitrary signal model, i.e., a subset of H. We present a generic framework that handles a large class of low-dimensional subsets but also unstructured and structured linear maps. We provide a simple recipe to prove that a random linear map satisfies a general RIP with high probability. We also describe a generic technique ...
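    The kind of statement such a recipe proves can be checked empirically in the simplest setting: a Gaussian random map applied to a finite set of signals nearly preserves every norm. The dimensions below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 200, 50          # ambient dim, target dim, number of signals

# Gaussian map scaled so that E||Ax||^2 = ||x||^2
A = rng.normal(size=(m, n)) / np.sqrt(m)

signals = rng.normal(size=(k, n))
ratios = (np.linalg.norm(signals @ A.T, axis=1)
          / np.linalg.norm(signals, axis=1))
# With high probability every ratio is close to 1: a RIP-type property
# on this finite signal set.
```

    The paper's contribution is extending this kind of concentration argument from finite sets to structured low-dimensional subsets of a (possibly infinite-dimensional) Hilbert space.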

  15. Treatment of electrochemical noise data by the Hilbert-Huang transform

    International Nuclear Information System (INIS)

    Rahier, A.

    2009-01-01

    Most of the classical approaches for treating electrochemical noise (ECN) data suffer from the non-linear and non-steady-state character of the delivered signal. Very often, the link between time and the local corrosion events supposedly responsible for ECN data signatures is lost during treatment, as is obvious when using the classical Fourier Transform (FT), followed by an analysis of the response in the frequency domain. In this particular case, the information directly related to the corrosion events is distributed over the full spectrum, thereby preventing the operator from deriving clear and precise conclusions. In 2005, we suggested an alternative data treatment based on the Hilbert-Huang transform (HHT). The latter keeps track of the time variable and copes with non-linear and non-steady-state behaviours of the system under examination. In 2006, we demonstrated the applicability of the newly proposed data treatment in the case of ECN data collected under BWR (Boiling Water Reactor) conditions. In 2007, we collected additional ECN data and started a preliminary investigation of two mathematical restrictions that may impair the interpretation of the results. We discovered a possible modification of the Hilbert transform allowing the generation of controlled phase shifts different from pi/2, which is always the phase shift of the standard Hilbert transform

  16. Experimental validation of a structural damage detection method based on marginal Hilbert spectrum

    Science.gov (United States)

    Banerji, Srishti; Roy, Timir B.; Sabamehr, Ardalan; Bagchi, Ashutosh

    2017-04-01

    Structural Health Monitoring (SHM) using dynamic characteristics of structures is crucial for early damage detection. Damage detection can be performed by capturing and assessing structural responses. Instrumented structures are monitored by analyzing the responses recorded by deployed sensors in the form of signals. Signal processing is an important tool for the processing of the collected data to diagnose anomalies in structural behavior. The vibration signature of the structure varies with damage. In order to attain effective damage detection, preservation of non-linear and non-stationary features of real structural responses is important. Decomposition of the signals into Intrinsic Mode Functions (IMF) by Empirical Mode Decomposition (EMD) and application of Hilbert-Huang Transform (HHT) addresses the time-varying instantaneous properties of the structural response. The energy distribution among different vibration modes of the intact and damaged structure depicted by Marginal Hilbert Spectrum (MHS) detects location and severity of the damage. The present work investigates damage detection analytically and experimentally by employing MHS. The testing of this methodology for different damage scenarios of a frame structure resulted in its accurate damage identification. The sensitivity of Hilbert Spectral Analysis (HSA) is assessed with varying frequencies and damage locations by means of calculating Damage Indices (DI) from the Hilbert spectrum curves of the undamaged and damaged structures.

  17. Quantum Hilbert Hotel.

    Science.gov (United States)

    Potoček, Václav; Miatto, Filippo M; Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; Liapis, Andreas C; Oi, Daniel K L; Boyd, Robert W; Jeffers, John

    2015-10-16

    In 1924 David Hilbert conceived a paradoxical tale involving a hotel with an infinite number of rooms to illustrate some aspects of the mathematical notion of "infinity." In continuous-variable quantum mechanics we routinely make use of infinite state spaces: here we show that such a theoretical apparatus can accommodate an analog of Hilbert's hotel paradox. We devise a protocol that, mimicking what happens to the guests of the hotel, maps the amplitudes of an infinite eigenbasis to twice their original quantum number in a coherent and deterministic manner, producing infinitely many unoccupied levels in the process. We demonstrate the feasibility of the protocol by experimentally realizing it on the orbital angular momentum of a paraxial field. This new non-Gaussian operation may be exploited, for example, for enhancing the sensitivity of NOON states, for increasing the capacity of a channel, or for multiplexing multiple channels into a single one.
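    On a truncated state vector the hotel map n → 2n is just an interleave with zeros; a finite-dimensional toy sketch of the (experimentally far subtler) protocol:

```python
import numpy as np

def hilbert_hotel(amplitudes):
    """Map the amplitude of level n to level 2n, coherently and
    deterministically, leaving every odd level unoccupied. Norm is
    preserved since the amplitudes are only relocated."""
    amplitudes = np.asarray(amplitudes, dtype=complex)
    out = np.zeros(2 * len(amplitudes) - 1, dtype=complex)
    out[::2] = amplitudes
    return out

psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # superposition of levels 0 and 1
phi = hilbert_hotel(psi)                   # now occupies levels 0 and 2
```

    In the infinite-dimensional setting of the paper this frees up infinitely many unoccupied levels, which is the analog of Hilbert's hotel accommodating new guests.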

  18. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determine the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
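    The truncated exponential law can be written down and sampled directly via its inverse CDF. The sketch below uses hypothetical illustration values for the rate and the slip cap, not parameters fitted to SRCMOD.

```python
import numpy as np

def trunc_exp_pdf(s, lam, s_max):
    """Truncated exponential density on [0, s_max]."""
    return lam * np.exp(-lam * s) / (1.0 - np.exp(-lam * s_max))

def trunc_exp_sample(lam, s_max, n, rng):
    """Inverse-CDF sampling: F(s) = (1 - e^{-lam s}) / (1 - e^{-lam s_max})."""
    u = rng.random(n)
    return -np.log(1.0 - u * (1.0 - np.exp(-lam * s_max))) / lam

def trunc_exp_mean(lam, s_max):
    """Analytic mean of the truncated exponential."""
    q = np.exp(-lam * s_max)
    return 1.0 / lam - s_max * q / (1.0 - q)

rng = np.random.default_rng(0)
lam, s_max = 0.5, 8.0          # hypothetical rate (1/m) and maximum-slip cap (m)
samples = trunc_exp_sample(lam, s_max, 200_000, rng)
```

    All samples respect the physical cap s_max, and the sample mean matches the analytic mean of the truncated law.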

  19. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determine the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  20. On the dimension of subspaces with bounded Schmidt rank

    International Nuclear Information System (INIS)

    Cubitt, Toby; Montanaro, Ashley; Winter, Andreas

    2008-01-01

    We consider the question of how large a subspace of a given bipartite quantum system can be when the subspace contains only highly entangled states. This is motivated in part by results of Hayden et al. [e-print arXiv:quant-ph/0407049; Commun. Math. Phys. 265, 95 (2006)], which show that in large d×d-dimensional systems there exist random subspaces of dimension almost d^2, all of whose states have entropy of entanglement at least log d - O(1). It is also a generalization of results on the dimension of completely entangled subspaces, which have connections with the construction of unextendible product bases. Here we take the Schmidt rank as the entanglement measure and determine, for every pair of local dimensions d_A and d_B, and every r, the largest dimension of a subspace consisting only of entangled states of Schmidt rank r or larger. This exact answer is a significant improvement on the best bounds that can be obtained using the random subspace techniques of Hayden et al. We also determine the converse: the largest dimension of a subspace with an upper bound on the Schmidt rank. Finally, we discuss the question of subspaces containing only states with Schmidt rank equal to r.
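    The Schmidt rank of a bipartite pure state is simply the rank of its reshaped coefficient matrix, which the singular value decomposition exposes directly. A small numpy sketch:

```python
import numpy as np

def schmidt_rank(psi, dA, dB, tol=1e-10):
    """Schmidt rank = rank of the dA x dB coefficient matrix of |psi>."""
    M = np.asarray(psi, dtype=complex).reshape(dA, dB)
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
    return int(np.sum(s > tol))

# Product state |0>|0> in a 2x2 system: Schmidt rank 1.
prod = np.kron([1, 0], [1, 0])
# Bell state (|00> + |11>)/sqrt(2): Schmidt rank 2 (maximally entangled).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
```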

  1. Hilbert schemes of points and infinite dimensional Lie algebras

    CERN Document Server

    Qin, Zhenbo

    2018-01-01

    Hilbert schemes, which parametrize subschemes in algebraic varieties, have been extensively studied in algebraic geometry for the last 50 years. The most interesting class of Hilbert schemes are schemes X^{[n]} of collections of n points (zero-dimensional subschemes) in a smooth algebraic surface X. Schemes X^{[n]} turn out to be closely related to many areas of mathematics, such as algebraic combinatorics, integrable systems, representation theory, and mathematical physics, among others. This book surveys recent developments of the theory of Hilbert schemes of points on complex surfaces and its interplay with infinite dimensional Lie algebras. It starts with the basics of Hilbert schemes of points and presents in detail an example of Hilbert schemes of points on the projective plane. Then the author turns to the study of cohomology of X^{[n]}, including the construction of the action of infinite dimensional Lie algebras on this cohomology, the ring structure of cohomology, equivariant cohomology of X^{[n]} a...

  2. Open superstring field theory on the restricted Hilbert space

    International Nuclear Information System (INIS)

    Konopka, Sebastian; Sachs, Ivo

    2016-01-01

    It appears that formulating an action for the Ramond sector of open superstring field theory requires either restricting the Hilbert space for the Ramond sector or introducing auxiliary fields with picture −3/2. The purpose of this note is to clarify the relation of the restricted Hilbert space to other approaches and to formulate open superstring field theory entirely in the small Hilbert space.

  3. Rigged Hilbert spaces for chaotic dynamical systems

    International Nuclear Information System (INIS)

    Suchanecki, Z.; Antoniou, I.; Bandtlow, O.F.

    1996-01-01

    We consider the problem of rigging for the Koopman operators of the Renyi and the baker maps. We show that the rigged Hilbert space for the Renyi maps has some of the properties of a strict inductive limit and give a detailed description of the rigged Hilbert space for the baker maps. copyright 1996 American Institute of Physics

  4. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    Full Text Available For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived using multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
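    The hypoexponential construction can be sketched as a sum of independent exponential stages, whose coefficient of variation sqrt(sum 1/r_i^2) / (sum 1/r_i) is always below unity. A stdlib-only sketch; the rates below are illustrative, not a fitted two-moment match.

```python
import math
import random

def hypoexp_sample(rates, rng):
    """One draw from a hypoexponential: the sum of independent exponential stages."""
    return sum(rng.expovariate(r) for r in rates)

def hypoexp_moments(rates):
    """Mean, variance, and coefficient of variation of the hypoexponential."""
    mean = sum(1.0 / r for r in rates)
    var = sum(1.0 / r**2 for r in rates)
    return mean, var, math.sqrt(var) / mean

rng = random.Random(42)
rates = [1.0, 3.0]                  # two stages -> coefficient of variation < 1
mean, var, cv = hypoexp_moments(rates)
draws = [hypoexp_sample(rates, rng) for _ in range(100_000)]
```

    With rates 1 and 3 the analytic coefficient of variation is sqrt(10)/4 ≈ 0.79, strictly inside (0; 1), and the sample mean agrees with the analytic mean 4/3.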

  5. The method of moments and nested Hilbert spaces in quantum mechanics

    International Nuclear Information System (INIS)

    Adeniyi Bangudu, E.

    1980-08-01

    It is shown how the structures of a nested Hilbert space Hsub(I), associated with a given Hilbert space Hsub(O), may be used to simplify our understanding of the effects of parameters, whose values have to be chosen rather than determined variationally, in the method of moments. The result, as applied to non-relativistic quartic oscillator and helium atom, is to associate the parameters with sequences of Hilbert spaces, while the error of the method of moments relative to the variational method corresponds to a nesting operator of the nested Hilbert space. Difficulties hindering similar interpretations in terms of rigged Hilbert space structures are highlighted. (author)

  6. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  7. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as of Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated. The EAQLV standard limits for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions with seven plotting-position formulas (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A probability distribution that properly represents TSP and PM10 was chosen based on the values of statistical performance indicators. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations; the Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and the number of days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit on 174 days.
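    For a fitted Frechet model, the exceedance computation described above reduces to one minus the model CDF, times the number of days. A stdlib sketch; the shape and scale parameters below are made-up illustration values, not the fitted 2014 Ain Sokhna parameters.

```python
import math

def frechet_cdf(x, alpha, s, m=0.0):
    """Frechet (inverse Weibull) CDF: exp(-((x - m)/s)^(-alpha)) for x > m."""
    if x <= m:
        return 0.0
    return math.exp(-((x - m) / s) ** (-alpha))

def exceedance(x, alpha, s, m=0.0, days=365):
    """Exceedance probability of threshold x and expected exceedance days."""
    p = 1.0 - frechet_cdf(x, alpha, s, m)
    return p, p * days

# Hypothetical fitted parameters for daily TSP concentrations.
alpha, s = 2.5, 110.0
p, n_days = exceedance(230.0, alpha, s)   # EAQLV 24-h TSP limit: 230 ug/m^3
```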

  8. Lectures on Hilbert schemes of points on surfaces

    CERN Document Server

    Nakajima, Hiraku

    1999-01-01

    This beautifully written book deals with one shining example: the Hilbert schemes of points on algebraic surfaces ... The topics are carefully and tastefully chosen ... The young person will profit from reading this book. --Mathematical Reviews The Hilbert scheme of a surface X describes collections of n (not necessarily distinct) points on X. More precisely, it is the moduli space for 0-dimensional subschemes of X of length n. Recently it was realized that Hilbert schemes originally studied in algebraic geometry are closely related to several branches of mathematics, such as singularities, symplectic geometry, representation theory--even theoretical physics. The discussion in the book reflects this feature of Hilbert schemes. One example of the modern, broader interest in the subject is a construction of the representation of the infinite-dimensional Heisenberg algebra, i.e., Fock space. This representation has been studied extensively in the literature in connection with affine Lie algebras, conformal field...

  9. Convexity Of Inversion For Positive Operators On A Hilbert Space

    International Nuclear Information System (INIS)

    Sangadji

    2001-01-01

    This paper discusses and proves three theorems for positive invertible operators on a Hilbert space. The first theorem compares the generalized arithmetic mean, generalized geometric mean, and generalized harmonic mean for positive invertible operators on a Hilbert space. The second and third theorems each give three inequalities for positive invertible operators on a Hilbert space that are mutually equivalent.

  10. Hilbert-Twin – A Novel Hilbert Transform-Based Method To Compute Envelope Of Free Decaying Oscillations Embedded In Noise, And The Logarithmic Decrement In High-Resolution Mechanical Spectroscopy HRMS

    Directory of Open Access Journals (Sweden)

    Magalas L.B.

    2015-06-01

    In this work, we present a novel Hilbert-twin method to compute the envelope and the logarithmic decrement, δ, from exponentially damped time-invariant harmonic strain signals embedded in noise. The results obtained from five computing methods are analyzed and compared: (1) the parametric OMI (Optimization in Multiple Intervals) method; two interpolated discrete Fourier transform-based (IpDFT) methods, namely (2) the Yoshida-Magalas (YM) method and (3) the classic Yoshida (Y) method; (4) the novel Hilbert-twin (H-twin) method based on the Hilbert transform; and (5) the conventional Hilbert transform (HT) method. The fundamental feature of the Hilbert-twin method is the efficient elimination of the intrinsic asymmetrical oscillations of the envelope, a^HT(t), obtained from the discrete Hilbert transform of the analyzed signals. The Hilbert-twin method estimates the logarithmic decrement with excellent performance, comparable to that of the OMI and YM methods for low and high damping levels. The Hilbert-twin method proved to be robust and effective in computing the logarithmic decrement and the resonant frequency of exponentially damped free-decaying signals embedded in experimental noise. The Hilbert-twin method is also appropriate for detecting nonlinearities in mechanical loss measurements of metals and alloys.
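    Method (5) above, the conventional Hilbert-transform route to the logarithmic decrement, is easy to sketch numerically: take the envelope from the analytic signal and fit a line to its logarithm. This sketch does not reproduce the Hilbert-twin refinement itself; it only illustrates the baseline the paper improves upon.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (discrete Hilbert transform)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def log_decrement(x, fs, f0, trim=0.1):
    """Fit a line to log|envelope|; delta = decay rate / resonant frequency."""
    env = np.abs(analytic_signal(x))
    n = len(x)
    i0, i1 = int(trim * n), int((1 - trim) * n)   # drop edge artefacts
    t = np.arange(n) / fs
    slope, _ = np.polyfit(t[i0:i1], np.log(env[i0:i1]), 1)
    return -slope / f0

# Exponentially damped free-decaying test signal with known delta = 0.01.
fs, f0, delta_true = 1000.0, 50.0, 0.01
t = np.arange(0, 2.0, 1.0 / fs)
x = np.exp(-delta_true * f0 * t) * np.sin(2 * np.pi * f0 * t)
delta = log_decrement(x, fs, f0)
```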

  11. Mathematical foundations of the projection-operator method

    International Nuclear Information System (INIS)

    Moore, S.M.

    1979-01-01

    Mathematical foundations are determined for the projection-operator method developed by Zwanzig and Mori and used in the study of cooperative phenomena in non-equilibrium processes. It is shown that the Hilbert space of operators can be taken as the Hilbert-Schmidt class. Comments are made on the possibility of a complete formulation of quantum mechanics in terms of this Hilbert space. (author)

  12. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link conditional probability to estimate the expected distributions for the route travel time. The major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
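    The convolution and conditional-probability ideas can be sketched on discretized link travel-time PMFs. The numbers below are illustrative, not FPM data: the first route PMF assumes independent links (plain convolution), the second mixes a hypothetical link-to-link conditional PMF against the upstream marginal.

```python
import numpy as np

# Discretized travel-time PMFs (minutes) for two successive links.
link1 = np.array([0.0, 0.2, 0.5, 0.3])        # P(T1 = 0, 1, 2, 3 min)
link2 = np.array([0.0, 0.1, 0.6, 0.2, 0.1])   # P(T2 = 0..4 min)

# Independent links: the route PMF is the convolution of the link PMFs.
route = np.convolve(link1, link2)

# Correlated links: replace the marginal of link 2 by P(T2 | T1).
cond = np.array([
    [0.0, 0.3, 0.6, 0.1, 0.0],   # P(T2 | T1 = 1): fast upstream -> fast downstream
    [0.0, 0.1, 0.6, 0.2, 0.1],   # P(T2 | T1 = 2)
    [0.0, 0.0, 0.4, 0.4, 0.2],   # P(T2 | T1 = 3): congestion persists
])
route_cond = np.zeros(len(link1) + cond.shape[1] - 1)
for t1, p1 in [(1, 0.2), (2, 0.5), (3, 0.3)]:     # the marginal of link 1
    for t2, p2 in enumerate(cond[t1 - 1]):
        route_cond[t1 + t2] += p1 * p2
```

    Both route PMFs sum to one; the correlated version shifts probability toward longer route times because slow upstream conditions make slow downstream conditions more likely.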

  13. Spontaneous emission and quantum discord: Comparison of Hilbert–Schmidt and trace distance discord

    Energy Technology Data Exchange (ETDEWEB)

    Jakóbczyk, Lech, E-mail: ljak@ift.uni.wroc.pl

    2014-09-12

    Hilbert–Schmidt and trace norm geometric quantum discord are compared with regard to their behavior during local time evolution. We consider the system of independent two-level atoms with time evolution given by the dissipative process of spontaneous emission. It is explicitly shown that the Hilbert–Schmidt norm discord has nonphysical properties with respect to such local evolution and cannot serve as a reasonable measure of quantum correlations and the better choice is to use trace norm discord as such a measure. - Highlights: • We compare Hilbert–Schmidt and trace norm geometric quantum discord. • We consider the system of independent two-level atoms with time evolution given by spontaneous emission. • We show explicitly that Hilbert–Schmidt norm discord has nonphysical properties.
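    The two norms underlying these discord measures already differ for simple 2×2 density matrices. The sketch below compares the norms themselves on a pair of hypothetical states; it does not perform the full discord optimization over classical states.

```python
import numpy as np

def hs_norm(A):
    """Hilbert-Schmidt (Frobenius) norm: sqrt(Tr(A^dagger A))."""
    return np.sqrt(np.trace(A.conj().T @ A).real)

def trace_norm(A):
    """Trace norm: the sum of the singular values of A."""
    return float(np.linalg.svd(A, compute_uv=False).sum())

rho = np.array([[0.7, 0.2], [0.2, 0.3]])      # a qubit state with coherences
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])    # the maximally mixed state
d_hs = hs_norm(rho - sigma)
d_tr = trace_norm(rho - sigma)
```

    The trace-norm distance always dominates the Hilbert-Schmidt distance, which is one reason the two induced discord measures can behave differently under local dynamics.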

  14. Transverse entanglement migration in Hilbert space

    International Nuclear Information System (INIS)

    Chan, K. W.; Torres, J. P.; Eberly, J. H.

    2007-01-01

    We show that, although the amount of mutual entanglement of photons propagating in free space is fixed, the type of correlations between the photons that determine the entanglement can dramatically change during propagation. We show that this amounts to a migration of entanglement in Hilbert space, rather than real space. For the case of spontaneous parametric down-conversion, the migration of entanglement in transverse coordinates takes place from modulus to phase of the biphoton state and back again. We propose an experiment to observe this migration in Hilbert space and to determine the full entanglement

  15. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
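    The cumulative binomial tail that such a program evaluates, and its use for k-out-of-n reliability, can be sketched in a few lines of stdlib Python (this is an independent illustration of the computation, not the CUMBIN source).

```python
from math import comb

def cum_binom(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the tail used in k-out-of-n reliability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Reliability of a 2-out-of-3 system whose units each work with probability 0.9.
r = cum_binom(2, 3, 0.9)
```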

  16. Generalized Schmidt decomposability and its relation to projective norms in multipartite entanglement

    International Nuclear Information System (INIS)

    Sokoli, Florian; Alber, Gernot

    2014-01-01

    Projective norms are capable of measuring entanglement of multipartite quantum states. However, typically, the explicit computation of these distance-based geometric entanglement monotones is very difficult even for finite dimensional systems. Motivated by the significance of Schmidt decompositions for our quantitative understanding of bipartite quantum entanglement, a generalization of this concept to multipartite scenarios is proposed, in the sense that generalized Schmidt decomposability of a multipartite pure state implies that its projective norm can be calculated in a simple way analogous to the bipartite case. Thus, this concept of generalized Schmidt decomposability of multipartite quantum states is linked in a natural way to projective norms as entanglement monotones. Therefore, it may not only be a convenient tool for calculations, but may also shed new light onto the intricate features of multipartite entanglement in an analogous way as the ‘classical’ Schmidt decomposition does for bipartite quantum systems. (paper)

  17. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    Science.gov (United States)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC in increasing the maximum discharge diminishes. Only for low probabilities is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, an application to a small watershed was carried out with the aim of preparing in advance the rational runoff coefficient tables to be used with the rational method, together with a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil.
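    Component (1) of the coupling, Green-Ampt infiltration, can be sketched with a fixed-point solve of its implicit cumulative-infiltration equation. The soil parameters below are hypothetical loamy-sand illustration values, not those of the paper.

```python
import math

def green_ampt_rate(K, psi, dtheta, F):
    """Green-Ampt infiltration rate f = K (1 + psi * dtheta / F), for cumulative F > 0."""
    return K * (1.0 + psi * dtheta / F)

def cumulative_F(K, psi, dtheta, t, tol=1e-10):
    """Solve the implicit equation F = K t + psi*dtheta * ln(1 + F/(psi*dtheta))."""
    a = psi * dtheta
    F = max(K * t, tol)                 # initial guess
    for _ in range(100):                # fixed-point iteration (a contraction for F > 0)
        F_new = K * t + a * math.log(1.0 + F / a)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

# Hypothetical parameters: conductivity K (cm/h), suction head psi (cm), moisture deficit.
K, psi, dtheta = 3.0, 6.1, 0.3
F1 = cumulative_F(K, psi, dtheta, 1.0)          # cumulative infiltration after 1 h
f1 = green_ampt_rate(K, psi, dtheta, F1)        # infiltration rate at that time
```

    Early in a storm the infiltration rate exceeds the saturated conductivity K and decays toward it as the wetting front advances, which is what makes the Hortonian runoff timing sensitive to antecedent moisture.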

  18. Hilbert schemes of points on some classes of surface singularities

    OpenAIRE

    Gyenge, Ádám

    2016-01-01

    We study the geometry and topology of Hilbert schemes of points on the orbifold surface [C^2/G], respectively the singular quotient surface C^2/G, where G is a finite subgroup of SL(2,C) of type A or D. We give a decomposition of the (equivariant) Hilbert scheme of the orbifold into affine space strata indexed by a certain combinatorial set, the set of Young walls. The generating series of Euler characteristics of Hilbert schemes of points of the singular surface of type A or D is computed in...

  19. Lectures on Hilbert modular varieties and modular forms

    CERN Document Server

    Goren, Eyal Z

    2001-01-01

    This book is devoted to certain aspects of the theory of p-adic Hilbert modular forms and moduli spaces of abelian varieties with real multiplication. The theory of p-adic modular forms is presented first in the elliptic case, introducing the reader to key ideas of N. M. Katz and J.-P. Serre. It is re-interpreted from a geometric point of view, which is developed to present the rudiments of a similar theory for Hilbert modular forms. The theory of moduli spaces of abelian varieties with real multiplication is presented first very explicitly over the complex numbers. Aspects of the general theory are then exposed, in particular, local deformation theory of abelian varieties in positive characteristic. The arithmetic of p-adic Hilbert modular forms and the geometry of moduli spaces of abelian varieties are related. This relation is used to study q-expansions of Hilbert modular forms, on the one hand, and stratifications of moduli spaces on the other hand. The book is addressed to graduate students and non-exper...

  20. Alternative probability theories for cognitive psychology.

    Science.gov (United States)

    Narens, Louis

    2014-01-01

    Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.

  1. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the exponent ... partition function Z of the network as the sum over all degree distributions, with given energy.

  2. A note on tensor fields in Hilbert spaces

    Directory of Open Access Journals (Sweden)

    LEONARDO BILIOTTI

    2002-06-01

    We discuss and extend to infinite-dimensional Hilbert spaces a well-known tensoriality criterion for linear endomorphisms of the space of smooth vector fields on R^n.

  3. Improved specimen reconstruction by Hilbert phase contrast tomography.

    Science.gov (United States)

    Barton, Bastian; Joos, Friederike; Schröder, Rasmus R

    2008-11-01

    The low signal-to-noise ratio (SNR) in images of unstained specimens recorded with conventional defocus phase contrast makes it difficult to interpret 3D volumes obtained by electron tomography (ET). The high defocus applied for conventional tilt series generates some phase contrast but leads to an incomplete transfer of object information. For tomography of biological weak-phase objects, optimal image contrast and subsequently an optimized SNR are essential for the reconstruction of details such as macromolecular assemblies at molecular resolution. The problem of low contrast can be partially solved by applying a Hilbert phase plate positioned in the back focal plane (BFP) of the objective lens while recording images in Gaussian focus. Images recorded with the Hilbert phase plate provide optimized positive phase contrast at low spatial frequencies, and the contrast transfer in principle extends to the information limit of the microscope. The antisymmetric Hilbert phase contrast (HPC) can be numerically converted into isotropic contrast, which is equivalent to the contrast obtained by a Zernike phase plate. Thus, in-focus HPC provides optimal structure factor information without limiting effects of the transfer function. In this article, we present the first electron tomograms of biological specimens reconstructed from Hilbert phase plate image series. We outline the technical implementation of the phase plate and demonstrate that the technique is routinely applicable for tomography. A comparison between conventional defocus tomograms and in-focus HPC volumes shows an enhanced SNR and an improved specimen visibility for in-focus Hilbert tomography.

  4. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

    The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying a power law rather than the standard Gibbs-type distributions; the Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution, or the q-exponential family, by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give the q-exponential family a new mathematical structure different from those previously proposed. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (Maximum A Posteriori Probability) estimator.
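    The q-family of power functions generalizing the exponential can be written down directly: exp_q(x) = [1 + (1-q)x]_+^{1/(1-q)}, which recovers the ordinary exponential as q → 1 and gives power-law tails for q > 1. A minimal stdlib sketch:

```python
import math

def exp_q(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]_+^(1/(1-q)); ordinary exp in the q -> 1 limit."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0                       # the [.]_+ cutoff
    return base ** (1.0 / (1.0 - q))

def log_q(x, q):
    """q-logarithm, the inverse of exp_q on its range: (x^(1-q) - 1) / (1 - q)."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)
```

    For q = 2, exp_q(-x) = 1/(1 + x): a heavy power-law tail in place of exponential decay, which is exactly the kind of behavior the q-Gibbs family is built to capture.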

  5. Comparative analysis through probability distributions of a data set

    Science.gov (United States)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and of applying the analysis results to make informed decisions. A number of statistical methods are available to help select the best-fitting model. Some graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution and to compare that distance to threshold values. Calculating the goodness-of-fit statistics also enables us to rank the fitted distributions according to how well they fit the data, which is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large data set is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions, and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
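    The "distance between data and tested distribution" idea is most concrete in the Kolmogorov-Smirnov statistic: the largest gap between the empirical CDF and the candidate model CDF. A stdlib sketch comparing a correct and a deliberately wrong model on simulated data:

```python
import math
import random

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the sample ECDF and a model CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

rng = random.Random(1)
sample = [rng.expovariate(1.0) for _ in range(5000)]   # data truly Exp(1)

exp_cdf = lambda x: 1.0 - math.exp(-x)                 # the correct model
norm_cdf = lambda x: 0.5 * (1.0 + math.erf((x - 1.0) / math.sqrt(2.0)))  # wrong: N(1, 1)

d_exp = ks_statistic(sample, exp_cdf)
d_norm = ks_statistic(sample, norm_cdf)
```

    Ranking by the statistic correctly prefers the exponential model: its distance is an order of magnitude smaller than the normal model's.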

  6. Russia's beer emperor Christian Ramm-Schmidt / Markku Saksa

    Index Scriptorium Estoniae

    Saksa, Markku

    2004-01-01

    On the activities of the international brewing company Baltic Beverages Holding (BBH) in the Baltic states, Russia, Ukraine, and Kazakhstan. Christian Ramm-Schmidt, head of the Russian subsidiary, describes the development of private enterprise in Russia, its business environment and culture, and the principles of managing companies there.

  7. Terahertz bandwidth all-optical Hilbert transformers based on long-period gratings.

    Science.gov (United States)

    Ashrafi, Reza; Azaña, José

    2012-07-01

    A novel, all-optical design for implementing terahertz (THz) bandwidth real-time Hilbert transformers is proposed and numerically demonstrated. An all-optical Hilbert transformer can be implemented using a uniform-period long-period grating (LPG) with a properly designed amplitude-only grating apodization profile, incorporating a single π-phase shift in the middle of the grating length. The designed LPG-based Hilbert transformers can be practically implemented using either fiber-optic or integrated-waveguide technologies. As a generalization, photonic fractional Hilbert transformers are also designed based on the same optical platform. In this general case, the resulting LPGs have multiple π-phase shifts along the grating length. Our numerical simulations confirm that all-optical Hilbert transformers capable of processing arbitrary optical signals with bandwidths well in the THz range can be implemented using feasible fiber/waveguide LPG designs.

  8. Connes distance function on fuzzy sphere and the connection between geometry and statistics

    International Nuclear Information System (INIS)

    Devi, Yendrembam Chaoba; Chakraborty, Biswajit; Prajapat, Shivraj; Mukhopadhyay, Aritra K.; Scholtz, Frederik G.

    2015-01-01

    An algorithm to compute the Connes spectral distance, adaptable to the Hilbert-Schmidt operatorial formulation of non-commutative quantum mechanics, was developed earlier by introducing the appropriate spectral triple and was used to compute infinitesimal distances in the Moyal plane, revealing a deep connection between geometry and statistics. In this paper, using the same algorithm, the Connes spectral distance has been calculated in the Hilbert-Schmidt operatorial formulation for the fuzzy sphere, whose spatial coordinates satisfy the su(2) algebra. This has been computed for both the discrete basis states and Perelomov's SU(2) coherent states. Here also, we get a connection between geometry and statistics, which is shown by computing the infinitesimal distance between mixed states on the quantum Hilbert space of a particular fuzzy sphere, indexed by n ∈ ℤ/2

  9. Neighboring Structure Visualization on a Grid-based Layout.

    Science.gov (United States)

    Marcou, G; Horvath, D; Varnek, A

    2017-10-01

    Here, we describe an algorithm to visualize chemical structures on a grid-based layout in such a way that similar structures are neighboring. It is based on structure reordering with the help of the Hilbert-Schmidt Independence Criterion, representing an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator. The method can be applied to any layout of two- or three-dimensional shape. The approach is demonstrated on a set of dopamine D5 ligands visualized on square, disk and spherical layouts. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Theory of linear operators in Hilbert space

    CERN Document Server

    Akhiezer, N I

    1993-01-01

    This classic textbook by two mathematicians from the USSR's prestigious Kharkov Mathematics Institute introduces linear operators in Hilbert space, and presents in detail the geometry of Hilbert space and the spectral theory of unitary and self-adjoint operators. It is directed to students at graduate and advanced undergraduate levels, but because of the exceptional clarity of its theoretical presentation and the inclusion of results obtained by Soviet mathematicians, it should prove invaluable for every mathematician and physicist. 1961, 1963 edition.

  11. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available friction. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. A simple numerical integration technique, the trapezoidal method, is also provided to predict the probability of slip in gait. The effect of the random variable distributions on the probability of slip is also studied. It is shown that neither the required nor the available friction distribution can automatically be assumed to be normal. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
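    The single-integral form can be sketched as follows: the probability of slip is P(available < required), i.e. the integral of the required-friction density times the available-friction CDF, evaluated here with the trapezoidal rule. The normal distributions and their parameters are illustrative assumptions, not values from the study (which explicitly allows non-normal inputs).

```python
import math

def npdf(x, mu, sigma):
    # normal probability density function
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def ncdf(x, mu, sigma):
    # normal cumulative distribution function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def slip_probability(mu_req, sd_req, mu_avail, sd_avail, lo=0.0, hi=2.0, n=2000):
    # P(slip) = P(available < required)
    #         = integral of f_required(u) * F_available(u) du   (single integral)
    # evaluated with the trapezoidal rule on [lo, hi]
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        y = npdf(u, mu_req, sd_req) * ncdf(u, mu_avail, sd_avail)
        total += y * (0.5 if i in (0, n) else 1.0)
    return total * h

# identical distributions -> slip probability 0.5 by symmetry
p_equal = slip_probability(0.5, 0.05, 0.5, 0.05)
# ample available friction -> slip is very unlikely
p_safe = slip_probability(0.2, 0.03, 0.6, 0.05)
print(p_equal, p_safe)
```

    Swapping `npdf`/`ncdf` for other densities (e.g. lognormal) reproduces the paper's point that the method works for any combination of input distributions.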

  12. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
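    Of the fitting techniques listed above, the method of moments is the simplest to illustrate: equate the sample moments to the theoretical moments and solve for the parameters. The gamma example below is a generic sketch with simulated data, not the report's Yucca Mountain analysis.

```python
import random
from statistics import mean, pvariance

# Method-of-moments fit of a gamma distribution (shape k, scale theta):
#   mean = k * theta,   variance = k * theta**2
#   =>  k = mean**2 / variance,   theta = variance / mean
def fit_gamma_moments(data):
    m = mean(data)
    v = pvariance(data)
    return m * m / v, v / m   # (shape, scale)

random.seed(1)
# synthetic "measured" data from a known gamma distribution
sample = [random.gammavariate(3.0, 2.0) for _ in range(50000)]
k_hat, theta_hat = fit_gamma_moments(sample)
print(k_hat, theta_hat)   # recovers roughly the true parameters (3, 2)
```

    Maximum likelihood estimation generally gives more efficient estimates, at the cost of requiring numerical optimization for distributions like the gamma.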

  13. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  14. Scalar transport across the turbulent/non-turbulent interface in jets: Schmidt number effects

    Science.gov (United States)

    Silva, Tiago S.; B. da Silva, Carlos; Idmec Team

    2016-11-01

    The dynamics of a passive scalar field near a turbulent/non-turbulent interface (TNTI) is analysed through direct numerical simulations (DNS) of turbulent planar jets, with Reynolds numbers in the range 142 ≤ Reλ ≤ 246 and Schmidt numbers in the range 0.07 ≤ Sc ≤ 7. The steepness of the scalar gradient, as observed from conditional profiles near the TNTI, increases with the Schmidt number. Conditional scalar gradient budgets show that for low and moderate Schmidt numbers a diffusive superlayer emerges at the TNTI, where scalar gradient diffusion dominates while production is negligible. For low Schmidt numbers the growth of the turbulent front is governed by molecular diffusion, whereas scalar gradient convection is negligible. The authors acknowledge the Laboratory for Advanced Computing at the University of Coimbra for providing the HPC, computing and consulting resources that have contributed to the research results reported within this paper. URL http://www.lca.uc.pt.

  15. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be

  16. ON STRONG AND WEAK CONVERGENCE IN n-HILBERT SPACES

    Directory of Open Access Journals (Sweden)

    Agus L. Soenjaya

    2014-03-01

    We discuss the concepts of strong and weak convergence in n-Hilbert spaces and study their properties. Some examples are given to illustrate the concepts. In particular, we prove an analogue of the Banach-Saks-Mazur theorem and of the Radon-Riesz property in the case of n-Hilbert spaces.

  17. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing up various disciplines frequently produces something profound and far-reaching. Cybernetics is one often-quoted example. The mix of information theory, statistics and computing technology proves to be very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  18. A Variable Turbulent Schmidt Number Formulation for Scramjet Application

    Science.gov (United States)

    Xiao, X.; Edwards, J. R.; Hassan, H. A.; Cutler, A. D.

    2004-01-01

    In high speed engines, thorough turbulent mixing of fuel and air is required to obtain high performance and high efficiency. Thus, the ability to predict turbulent mixing is crucial in obtaining accurate numerical simulation of an engine and its performance. The current state of the art in CFD simulation is to assume both the turbulent Prandtl and Schmidt numbers to be constant. However, since the mixing of fuel and air is inversely proportional to the Schmidt number, a value of 0.45 for the Schmidt number will produce twice as much diffusion as a value of 0.9. Because of this, current CFD tools and models have not been able to provide the guidance required for the efficient design of a scramjet engine. The goal of this investigation is to develop the framework needed to calculate the turbulent Prandtl and Schmidt numbers as part of the solution. This requires four additional equations: two for the temperature variance and its dissipation rate, and two for the concentration variance and its dissipation rate. In the current investigation emphasis is placed on studying mixing without reactions. For such flows, a variable Prandtl number does not play a major role in determining the flow. This, however, will have to be addressed when combustion is present. The approach used is similar to that used to develop the k-zeta model. In this approach, the relevant equations are derived from the exact Navier-Stokes equations and each individual correlation is modeled. This ensures that the relevant physics is incorporated into the model equations. This task has been accomplished. The final set of equations has no wall or damping functions. Moreover, the equations are tensorially consistent and Galilean invariant. The derivation of the model equations is rather lengthy and thus will not be incorporated into this abstract, but will be included in the final paper. As a preliminary to formulating the proposed model, the original k-zeta model with constant turbulent Prandtl and

  19. A Probability Distribution over Latent Causes, in the Orbitofrontal Cortex.

    Science.gov (United States)

    Chan, Stephanie C Y; Niv, Yael; Norman, Kenneth A

    2016-07-27

    The orbitofrontal cortex (OFC) has been implicated in both the representation of "state," in studies of reinforcement learning and decision making, and also in the representation of "schemas," in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or "latent cause" that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes' rule to compute a posterior probability distribution over latent causes. To test whether such a posterior probability distribution is represented in the OFC, we tasked human participants with inferring a probability distribution over four possible latent causes, based on their observations. Using fMRI pattern similarity analyses, we found that BOLD activity in the OFC is best explained as representing the (log-transformed) posterior distribution over latent causes. Furthermore, this pattern explained OFC activity better than other task-relevant alternatives, such as the most probable latent cause, the most recent observation, or the uncertainty over latent causes. Our world is governed by hidden (latent) causes that we cannot observe, but which generate the observations we see. A range of high-level cognitive processes require inference of a probability distribution (or "belief distribution") over the possible latent causes that might be generating our current observations. This is true for reinforcement learning and decision making (where the latent cause comprises the true "state" of the task), and for episodic memory (where memories are believed to be organized by the inferred situation or "schema"). Using fMRI, we show that this belief distribution over latent causes is encoded in patterns of brain activity in the orbitofrontal cortex, an area that has been separately implicated in the representations of both states and schemas. Copyright © 2016 the authors 0270-6474/16/367817-12$15.00/0.
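    The statistically optimal inference described above is a straightforward application of Bayes' rule. Below is a minimal sketch with four hypothetical latent causes, each generating binary observations at a different rate; the rates and the observation sequence are assumptions for illustration, not the study's task design.

```python
# Posterior over latent causes via Bayes' rule:
#   P(cause | obs) is proportional to P(obs | cause) * P(cause)
likelihood_heads = [0.1, 0.4, 0.6, 0.9]   # P(obs=1 | cause), assumed rates
prior = [0.25, 0.25, 0.25, 0.25]          # uniform prior over 4 causes

def update(posterior, obs):
    # one Bayesian update for a single binary observation
    unnorm = [p * (lh if obs == 1 else 1 - lh)
              for p, lh in zip(posterior, likelihood_heads)]
    z = sum(unnorm)                        # normalizing constant
    return [u / z for u in unnorm]

post = prior
for obs in [1, 1, 0, 1, 1, 1]:            # a run of mostly-1 observations
    post = update(post, obs)
print(post)  # probability mass shifts toward the high-rate causes
```

    The study's fMRI analysis compares brain activity against exactly this kind of (log-transformed) posterior vector, rather than against a single most-probable cause.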

  20. Jump probabilities in the non-Markovian quantum jump method

    International Nuclear Information System (INIS)

    Haerkoenen, Kari

    2010-01-01

    The dynamics of a non-Markovian open quantum system described by a general time-local master equation is studied. The propagation of the density operator is constructed in terms of two processes: (i) deterministic evolution and (ii) evolution of a probability density functional in the projective Hilbert space. The analysis provides a derivation for the jump probabilities used in the recently developed non-Markovian quantum jump (NMQJ) method (Piilo et al 2008 Phys. Rev. Lett. 100 180402).

  1. Spectral Theory of Operators on Hilbert Spaces

    CERN Document Server

    Kubrusly, Carlos S

    2012-01-01

    This work is a concise introduction to spectral theory of Hilbert space operators. Its emphasis is on recent aspects of theory and detailed proofs, with the primary goal of offering a modern introductory textbook for a first graduate course in the subject. The coverage of topics is thorough, as the book explores various delicate points and hidden features often left untreated. Spectral Theory of Operators on Hilbert Space is addressed to an interdisciplinary audience of graduate students in mathematics, statistics, economics, engineering, and physics. It will also be useful to working mathemat

  2. Probability distributions for Markov chain based quantum walks

    Science.gov (United States)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.

  3. Universality of Schmidt decomposition and particle identity

    Science.gov (United States)

    Sciara, Stefania; Lo Franco, Rosario; Compagno, Giuseppe

    2017-03-01

    Schmidt decomposition is a widely employed tool of quantum theory which plays a key role for distinguishable particles in scenarios such as entanglement characterization, theory of measurement and state purification. Yet, its formulation for identical particles remains controversial, jeopardizing its application to analyze general many-body quantum systems. Here we prove, using a newly developed approach, a universal Schmidt decomposition which allows faithful quantification of the physical entanglement due to the identity of particles. We find that it is affected by single-particle measurement localization and state overlap. We study paradigmatic two-particle systems where identical qubits and qutrits are located in the same place or in separated places. For the case of two qutrits in the same place, we show that their entanglement behavior, whose physical interpretation is given, differs from that obtained before by different methods. Our results are generalizable to multiparticle systems and open the way for further developments in quantum information processing exploiting particle identity as a resource.
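    For distinguishable particles, the standard Schmidt decomposition the abstract builds on is just the singular value decomposition of the state's coefficient matrix; the paper's extension to identical particles is not reproduced here. A minimal sketch for a two-qubit Bell state:

```python
import numpy as np

# For a bipartite pure state |psi> = sum_ij c[i,j] |i>|j> of *distinguishable*
# particles, the Schmidt coefficients are the singular values of c.
c = np.zeros((2, 2))
c[0, 0] = c[1, 1] = 1 / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)

u, s, vh = np.linalg.svd(c)          # columns of u, rows of vh: Schmidt bases
print(s)                             # Schmidt coefficients: [0.7071..., 0.7071...]

schmidt_rank = np.sum(s > 1e-12)     # number of nonzero Schmidt coefficients
entropy = -np.sum(s**2 * np.log2(s**2))   # entanglement entropy in bits
print(schmidt_rank, entropy)         # rank 2, entropy 1.0: maximally entangled
```

    A Schmidt rank of 1 means a product state; any rank above 1 signals entanglement, which is why the decomposition is the workhorse for entanglement characterization mentioned above.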

  4. Phase difference estimation method based on data extension and Hilbert transform

    International Nuclear Information System (INIS)

    Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao

    2015-01-01

    To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampled signals, a phase difference estimation method based on data extension and the Hilbert transform is proposed. The estimated phase difference is obtained by means of data extension, the Hilbert transform, cross-correlation, auto-correlation, and weighted phase averaging. Theoretical analysis shows that the proposed method effectively suppresses the end effects of the Hilbert transform. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and outperforms the correlation, Hilbert transform, and data-extension-based correlation methods, which contributes to improving the measurement precision of the Coriolis mass flowmeter. (paper)
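    The core Hilbert-transform step of such methods can be sketched as follows: build the analytic signal of each channel via the FFT and average the instantaneous phase difference. This sketch omits the paper's data-extension step, and the signal parameters are assumed test values.

```python
import numpy as np

def analytic_signal(x):
    # Hilbert-transform-based analytic signal via the FFT:
    # keep DC, double positive frequencies, zero negative frequencies.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Two tones with a known phase offset; estimate it from the analytic signals.
fs, f0, true_phase = 1000.0, 50.0, 0.6   # Hz, Hz, rad (assumed values)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)
y = np.sin(2 * np.pi * f0 * t + true_phase)

zx, zy = analytic_signal(x), analytic_signal(y)
# average the instantaneous phase difference over the whole record
est = np.angle(np.mean(zy * np.conj(zx)))
print(est)  # close to 0.6
```

    With a non-integer number of periods in the window, the FFT-based analytic signal suffers the end effects mentioned above, which is precisely what the paper's data extension is designed to mitigate.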

  5. Multisymplectic unified formalism for Einstein-Hilbert gravity

    Science.gov (United States)

    Gaset, Jordi; Román-Roy, Narciso

    2018-03-01

    We present a covariant multisymplectic formulation for the Einstein-Hilbert model of general relativity. As it is described by a second-order singular Lagrangian, this is a gauge field theory with constraints. The use of the unified Lagrangian-Hamiltonian formalism is particularly interesting when it is applied to these kinds of theories, since it simplifies the treatment of them, in particular, the implementation of the constraint algorithm, the retrieval of the Lagrangian description, and the construction of the covariant Hamiltonian formalism. In order to apply this algorithm to the covariant field equations, they must be written in a suitable geometrical way, which consists of using integrable distributions, represented by multivector fields of a certain type. We apply all these tools to the Einstein-Hilbert model without and with energy-matter sources. We obtain and explain the geometrical and physical meaning of the Lagrangian constraints and we construct the multimomentum (covariant) Hamiltonian formalisms in both cases. As a consequence of the gauge freedom and the constraint algorithm, we see how this model is equivalent to a first-order regular theory, without gauge freedom. In the case of the presence of energy-matter sources, we show how some relevant geometrical and physical characteristics of the theory depend on the type of source. In all the cases, we obtain explicitly multivector fields which are solutions to the gravitational field equations. Finally, a brief study of symmetries and conservation laws is done in this context.

  6. A New Method for Non-linear and Non-stationary Time Series Analysis:
    The Hilbert Spectral Analysis

    CERN Multimedia

    CERN. Geneva

    2000-01-01

    A new method for analysing non-linear and non-stationary data has been developed. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to non-linear and non-stationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. Classical non-l...

  7. Evaluation of burst probability for tubes by Weibull distributions

    International Nuclear Information System (INIS)

    Kao, S.

    1975-10-01

    The investigation of candidate distributions that best describe the burst pressure failure probability characteristics of nuclear power steam generator tubes has been continued. To date it has been found that the Weibull distribution provides an acceptable fit for the available data from both the statistical and physical viewpoints. The reasons for the acceptability of the Weibull distribution are stated together with the results of tests for the suitability of fit. In exploring the acceptability of the Weibull distribution for the fitting, a graphical method called the "density-gram" is employed instead of the usual histogram. With this method a more sensible graphical observation of the empirical density may be made for cases where the available data are very limited. Based on these methods, estimates of failure pressure are made for the left-tail probabilities
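    The graphical-fit idea can be sketched with the classic "Weibull plot": ln(-ln(1-F)) is linear in ln(x), so a regression yields the shape and scale, from which left-tail failure probabilities follow. The simulated burst pressures and the evaluation pressure are illustrative assumptions, not the report's steam generator data.

```python
import math
import random

def weibull_cdf(x, shape, scale):
    # two-parameter Weibull CDF used for burst-pressure failure probability
    return 1.0 - math.exp(-((x / scale) ** shape))

def fit_weibull_plot(data):
    # Weibull-plot fit: regress ln(-ln(1 - F_i)) on ln(x_i) using median ranks;
    # slope = shape, intercept = -shape * ln(scale)
    xs = sorted(data)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i + 0.5) / n)))
           for i, x in enumerate(xs)]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((px - mx) * (py - my) for px, py in pts)
             / sum((px - mx) ** 2 for px, py in pts))
    scale = math.exp(mx - my / slope)
    return slope, scale   # (shape, scale)

random.seed(2)
burst = [random.weibullvariate(10.0, 4.0) for _ in range(20000)]  # scale=10, shape=4
shape_hat, scale_hat = fit_weibull_plot(burst)
# left-tail failure probability at a low pressure, from the fitted model
p_left = weibull_cdf(5.0, shape_hat, scale_hat)
print(shape_hat, scale_hat, p_left)
```

    The left-tail probabilities are exactly the quantities of interest here, since a tube fails when the burst pressure falls in the low end of the fitted distribution.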

  8. On Hilbert space of paths

    International Nuclear Information System (INIS)

    Exner, P.; Kolerov, G.I.

    1980-01-01

    A Hilbert space of paths, the elements of which are determined by trigonometric series, was proposed and used recently by Truman. This space is shown to consist precisely of all absolutely continuous paths ending in the origin with square-integrable derivatives

  9. Hilbert's 'Foundations of Physics': Gravitation and electromagnetism within the axiomatic method

    Science.gov (United States)

    Brading, K. A.; Ryckman, T. A.

    2008-01-01

    In November and December 1915, Hilbert presented two communications to the Göttingen Academy of Sciences under the common title 'The Foundations of Physics'. Versions of each eventually appeared in the Nachrichten of the Academy. Hilbert's first communication has received significant reconsideration in recent years, following the discovery of printer's proofs of this paper, dated 6 December 1915. The focus has been primarily on the 'priority dispute' over the Einstein field equations. Our contention, in contrast, is that the discovery of the December proofs makes it possible to see the thematic linkage between the material that Hilbert cut from the published version of the first communication and the content of the second, as published in 1917. The latter has been largely either disregarded or misinterpreted, and our aim is to show that (a) Hilbert's two communications should be regarded as part of a wider research program within the overarching framework of 'the axiomatic method' (as Hilbert expressly stated was the case), and (b) the second communication is a fine and coherent piece of work within this framework, whose principal aim is to address an apparent tension between general invariance and causality (in the precise sense of Cauchy determination), pinpointed in Theorem I of the first communication. This is not the same problem as that found in Einstein's 'hole argument'-something that, we argue, never confused Hilbert.

  10. Isometric Reflection Vectors and Characterizations of Hilbert Spaces

    Directory of Open Access Journals (Sweden)

    Donghai Ji

    2014-01-01

    A known characterization of Hilbert spaces via isometric reflection vectors is based on the following implication: if the set of isometric reflection vectors in the unit sphere SX of a Banach space X has nonempty interior in SX, then X is a Hilbert space. Applying a recent result based on a well-known theorem of Kronecker from number theory, we improve this by a substantial reduction of the set of isometric reflection vectors needed in the hypothesis.

  11. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
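    One way to see the anomaly described above: for a lognormal variable, an interval symmetric about the median on a log scale has the form [median/f, median*f], while the mean, median and mode drift far apart as the log-scale width sigma grows. The parameter values below are illustrative assumptions, not the communication's example.

```python
import math

def norm_ppf(q, lo=-10.0, hi=10.0):
    # inverse standard-normal CDF by bisection (sufficient for illustration)
    cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lognormal_interval(mu, sigma, p=0.68):
    # interval symmetric about the median on the log scale,
    # containing probability p: [median/f, median*f] with f = exp(z*sigma)
    z = norm_ppf(0.5 * (1 + p))
    median = math.exp(mu)
    f = math.exp(z * sigma)
    return median / f, median * f

mu, sigma = 0.0, 1.5        # a broad lognormal (large uncertainty)
lo68, hi68 = lognormal_interval(mu, sigma)
mean = math.exp(mu + sigma**2 / 2)
median = math.exp(mu)
mode = math.exp(mu - sigma**2)
print(lo68, hi68)
print(mean, median, mode)   # mean >> median >> mode when sigma is large
```

    For this sigma the mean sits far above the median, so quoting "mean plus or minus standard deviation" badly misrepresents where the probability mass actually lies, which is the communication's central point.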

  12. Simulation of Daily Weather Data Using Theoretical Probability Distributions.

    Science.gov (United States)

    Bruhn, J. A.; Fry, W. E.; Fick, G. W.

    1980-09-01

A computer simulation model was constructed to supply daily weather data to a plant disease management model for potato late blight. In the weather model Monte Carlo techniques were employed to generate daily values of precipitation, maximum temperature, minimum temperature, minimum relative humidity and total solar radiation. Each weather variable is described by a known theoretical probability distribution but the values of the parameters describing each distribution are dependent on the occurrence of rainfall. Precipitation occurrence is described by a first-order Markov chain. The amount of rain, given that rain has occurred, is described by a gamma probability distribution. Maximum and minimum temperature are simulated with a trivariate normal probability distribution involving maximum temperature on the previous day, maximum temperature on the current day and minimum temperature on the current day. Parameter values for this distribution are dependent on the occurrence of rain on the previous day. Both minimum relative humidity and total solar radiation are assumed to be normally distributed. The values of the parameters describing the distribution of minimum relative humidity are dependent on rainfall occurrence on the previous and current days. Parameter values for total solar radiation are dependent on the occurrence of rain on the current day. The assumptions made during model construction were found to be appropriate for actual weather data from Geneva, New York. The performance of the weather model was evaluated by comparing the cumulative frequency distributions of simulated weather data with the distributions of actual weather data from Geneva, New York and Fort Collins, Colorado. For each location, simulated weather data were similar to actual weather data in terms of mean response, variability and autocorrelation. The possible applications of this model when used with models of other components of the agro-ecosystem are discussed.
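The precipitation component described above (first-order Markov chain for occurrence, gamma-distributed amounts on wet days) can be sketched in a few lines; the transition probabilities and gamma parameters below are invented placeholders, not the values fitted to the Geneva data.

```python
import random

def simulate_precip(days, p_wet_given_dry=0.3, p_wet_given_wet=0.6,
                    shape=0.8, scale=5.0, seed=42):
    """Daily precipitation series: a first-order Markov chain decides
    wet/dry, and wet-day amounts are drawn from a gamma distribution.
    All parameter values are illustrative, not fitted."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(days):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rain = simulate_precip(365)  # one year of simulated daily rainfall (mm)
```

The same conditioning pattern, switching parameter sets on the wet/dry state, extends to the temperature, humidity and radiation variables in the model.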

  13. F-actin distribution at nodes of Ranvier and Schmidt-Lanterman incisures in mammalian sciatic nerves.

    Science.gov (United States)

    Kun, Alejandra; Canclini, Lucía; Rosso, Gonzalo; Bresque, Mariana; Romeo, Carlos; Hanusz, Alicia; Cal, Karina; Calliari, Aldo; Sotelo Silveira, José; Sotelo, José R

    2012-07-01

    Very little is known about the function of the F-actin cytoskeleton in the regeneration and pathology of peripheral nerve fibers. The actin cytoskeleton has been associated with maintenance of tissue structure, transmission of traction and contraction forces, and an involvement in cell motility. Therefore, the state of the actin cytoskeleton strongly influences the mechanical properties of cells and intracellular transport therein. In this work, we analyze the distribution of F-actin at Schmidt-Lanterman Incisures (SLI) and nodes of Ranvier (NR) domains in normal, regenerating and pathologic Trembler J (TrJ/+) sciatic nerve fibers, of rats and mice. F-actin was quantified and it was found increased in TrJ/+, both in SLI and NR. However, SLI and NR of regenerating rat sciatic nerve did not show significant differences in F-actin, as compared with normal nerves. Cytochalasin-D and Latrunculin-A were used to disrupt the F-actin network in normal and regenerating rat sciatic nerve fibers. Both drugs disrupt F-actin, but in different ways. Cytochalasin-D did not disrupt Schwann cell (SC) F-actin at the NR. Latrunculin-A did not disrupt F-actin at the boundary region between SC and axon at the NR domain. We surmise that the rearrangement of F-actin in neurological disorders, as presented here, is an important feature of TrJ/+ pathology as a Charcot-Marie-Tooth (CMT) model. Copyright © 2012 Wiley Periodicals, Inc.

  14. Four-dimensional hilbert curves for R-trees

    DEFF Research Database (Denmark)

    Haverkort, Herman; Walderveen, Freek van

    2011-01-01

    Two-dimensional R-trees are a class of spatial index structures in which objects are arranged to enable fast window queries: report all objects that intersect a given query window. One of the most successful methods of arranging the objects in the index structure is based on sorting the objects...... according to the positions of their centers along a two-dimensional Hilbert space-filling curve. Alternatively, one may use the coordinates of the objects' bounding boxes to represent each object by a four-dimensional point, and sort these points along a four-dimensional Hilbert-type curve. In experiments...

  15. Hilbert schemes of points and Heisenberg algebras

    International Nuclear Information System (INIS)

    Ellingsrud, G.; Goettsche, L.

    2000-01-01

    Let X [n] be the Hilbert scheme of n points on a smooth projective surface X over the complex numbers. In these lectures we describe the action of the Heisenberg algebra on the direct sum of the cohomologies of all the X [n] , which has been constructed by Nakajima. In the second half of the lectures we study the relation of the Heisenberg algebra action and the ring structures of the cohomologies of the X [n] , following recent work of Lehn. In particular we study the Chern and Segre classes of tautological vector bundles on the Hilbert schemes X [n] . (author)

  16. Liquid identification by Hilbert spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Lyatti, M; Divin, Y; Poppe, U; Urban, K, E-mail: M.Lyatti@fz-juelich.d, E-mail: Y.Divin@fz-juelich.d [Forschungszentrum Juelich, 52425 Juelich (Germany)

    2009-11-15

Fast and reliable identification of liquids is of great importance in, for example, security, biology and the beverage industry. An unambiguous identification of liquids can be made by electromagnetic measurements of their dielectric functions in the frequency range of their main dispersions, but this frequency range, from a few GHz to a few THz, is not covered by any conventional spectroscopy. We have developed a concept of liquid identification based on our new Hilbert spectroscopy and high-Tc Josephson junctions, which can operate at the intermediate range from microwaves to THz frequencies. A demonstration setup has been developed consisting of a polychromatic radiation source and a compact Hilbert spectrometer integrated in a Stirling cryocooler. Reflection polychromatic spectra of various bottled liquids have been measured at the spectral range of 15-300 GHz with total scanning time down to 0.2 s and identification of liquids has been demonstrated.

  17. Liquid identification by Hilbert spectroscopy

    Science.gov (United States)

    Lyatti, M.; Divin, Y.; Poppe, U.; Urban, K.

    2009-11-01

Fast and reliable identification of liquids is of great importance in, for example, security, biology and the beverage industry. An unambiguous identification of liquids can be made by electromagnetic measurements of their dielectric functions in the frequency range of their main dispersions, but this frequency range, from a few GHz to a few THz, is not covered by any conventional spectroscopy. We have developed a concept of liquid identification based on our new Hilbert spectroscopy and high-Tc Josephson junctions, which can operate at the intermediate range from microwaves to THz frequencies. A demonstration setup has been developed consisting of a polychromatic radiation source and a compact Hilbert spectrometer integrated in a Stirling cryocooler. Reflection polychromatic spectra of various bottled liquids have been measured at the spectral range of 15-300 GHz with total scanning time down to 0.2 s and identification of liquids has been demonstrated.

  18. Liquid identification by Hilbert spectroscopy

    International Nuclear Information System (INIS)

    Lyatti, M; Divin, Y; Poppe, U; Urban, K

    2009-01-01

Fast and reliable identification of liquids is of great importance in, for example, security, biology and the beverage industry. An unambiguous identification of liquids can be made by electromagnetic measurements of their dielectric functions in the frequency range of their main dispersions, but this frequency range, from a few GHz to a few THz, is not covered by any conventional spectroscopy. We have developed a concept of liquid identification based on our new Hilbert spectroscopy and high-Tc Josephson junctions, which can operate at the intermediate range from microwaves to THz frequencies. A demonstration setup has been developed consisting of a polychromatic radiation source and a compact Hilbert spectrometer integrated in a Stirling cryocooler. Reflection polychromatic spectra of various bottled liquids have been measured at the spectral range of 15-300 GHz with total scanning time down to 0.2 s and identification of liquids has been demonstrated.

  19. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented together with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  20. Application of Arbitrary-Order Hilbert Spectral Analysis to Passive Scalar Turbulence

    International Nuclear Information System (INIS)

    Huang, Y X; Lu, Z M; Liu, Y L; Schmitt, F G; Gagne, Y

    2011-01-01

In previous work [Huang et al., PRE 82, 26319, 2010], we found that the passive scalar turbulence field may be less intermittent than previously believed. Here we apply the same method, namely arbitrary-order Hilbert spectral analysis, to a passive scalar (temperature) time series with a Taylor microscale Reynolds number Re_λ ≅ 3000. We find that with increasing Reynolds number, the discrepancy between the Hilbert scaling exponents ξ_θ(q) and Kolmogorov-Obukhov-Corrsin (KOC) theory increases; consequently, the discrepancy between the Hilbert and structure-function approaches could disappear at infinite Reynolds number.

  1. Frames and outer frames for Hilbert C^*-modules

    OpenAIRE

    Arambašić, Ljiljana; Bakić, Damir

    2015-01-01

    The goal of the present paper is to extend the theory of frames for countably generated Hilbert $C^*$-modules over arbitrary $C^*$-algebras. In investigating the non-unital case we introduce the concept of outer frame as a sequence in the multiplier module $M(X)$ that has the standard frame property when applied to elements of the ambient module $X$. Given a Hilbert $\\A$-module $X$, we prove that there is a bijective correspondence of the set of all adjointable surjections from the generalize...

  2. Operator entanglement of two-qubit joint unitary operations revisited: Schmidt number approach

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Hui-Zhi; Li, Chao; Yang, Qing; Yang, Ming, E-mail: mingyang@ahu.edu.cn [Key Laboratory of Opto-electronic Information Acquisition and Manipulation, Ministry of Education, School of Physics and Material Science, Anhui University Hefei (China); Cao, Zhuo-Liang [School of Electronic Information Engineering, Hefei Normal University (China)

    2012-08-15

The operator entanglement of two-qubit joint unitary operations is revisited. The Schmidt number, an important attribute of a two-qubit unitary operation, may be connected with the entanglement measure of the unitary operator. We find that the entanglement measure of two-qubit unitary operators is classified by the Schmidt number of those operators. We also discuss the exact relation between the operator entanglement and the parameters of the unitary operator. (author)

  3. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space-independent, one-group neutron point reactor model without delayed neutrons. We recall the generating function methodology and the analytical results obtained by G.I. Bell when the c_2 approximation is used, and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution, which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time dependent distributions and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
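The inversion step, recovering probabilities from a generating function sampled on the unit circle with a discrete Fourier transform, can be sketched as follows; the Poisson generating function used as a check is an assumption of this example, not the reactor model of the paper.

```python
import cmath
import math

def probabilities_from_pgf(pgf, n):
    """Invert a probability generating function G(z) = sum_k p_k z^k by
    sampling it on the n-th roots of unity and applying a DFT:
    p_k = (1/n) * sum_j G(w^j) * w^(-j*k), with w = exp(2*pi*i/n)."""
    w = cmath.exp(2j * math.pi / n)
    values = [pgf(w ** j) for j in range(n)]
    return [abs(sum(values[j] * w ** (-j * k) for j in range(n)) / n)
            for k in range(n)]

# Check against a Poisson(2) distribution, whose PGF is exp(lam*(z-1)).
lam = 2.0
probs = probabilities_from_pgf(lambda z: cmath.exp(lam * (z - 1)), 64)
```

Aliasing folds the tail mass p_{k+n}, p_{k+2n}, ... back onto p_k, so n must be chosen large enough that the tail of the distribution is negligible.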

  4. Transition probability spaces in loop quantum gravity

    Science.gov (United States)

    Guo, Xiao-Kan

    2018-03-01

    We study the (generalized) transition probability spaces, in the sense of Mielnik and Cantoni, for spacetime quantum states in loop quantum gravity. First, we show that loop quantum gravity admits the structures of transition probability spaces. This is exemplified by first checking such structures in covariant quantum mechanics and then identifying the transition probability spaces in spin foam models via a simplified version of general boundary formulation. The transition probability space thus defined gives a simple way to reconstruct the discrete analog of the Hilbert space of the canonical theory and the relevant quantum logical structures. Second, we show that the transition probability space and in particular the spin foam model are 2-categories. Then we discuss how to realize in spin foam models two proposals by Crane about the mathematical structures of quantum gravity, namely, the quantum topos and causal sites. We conclude that transition probability spaces provide us with an alternative framework to understand various foundational questions of loop quantum gravity.

  5. H-SLAM: Rao-Blackwellized Particle Filter SLAM Using Hilbert Maps

    Directory of Open Access Journals (Sweden)

    Guillem Vallicrosa

    2018-05-01

Full Text Available Occupancy Grid maps provide a probabilistic representation of space which is important for a variety of robotic applications like path planning and autonomous manipulation. In this paper, a SLAM (Simultaneous Localization and Mapping) framework capable of obtaining this representation online is presented. The H-SLAM (Hilbert Maps SLAM) is based on the Hilbert Map representation and uses a Particle Filter to represent the robot state. Hilbert Maps offer a continuous probabilistic representation with a small memory footprint. We present a series of experimental results carried out both in simulation and with real AUVs (Autonomous Underwater Vehicles). These results demonstrate that our approach is able to represent the environment more consistently while remaining capable of running online.

  6. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  7. Topological freeness for Hilbert bimodules

    DEFF Research Database (Denmark)

    Kwasniewski, Bartosz

    2014-01-01

    It is shown that topological freeness of Rieffel’s induced representation functor implies that any C*-algebra generated by a faithful covariant representation of a Hilbert bimodule X over a C*-algebra A is canonically isomorphic to the crossed product A ⋊ X ℤ. An ideal lattice description...

  8. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
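For orientation, the rank product statistic and the permutation approximation that the exact derivation replaces can be sketched as follows; the gene counts and ranks below are invented for illustration.

```python
import math
import random

def rank_product(ranks):
    """Rank product statistic: geometric mean of one gene's ranks
    across k replicated experiments."""
    return math.prod(ranks) ** (1.0 / len(ranks))

def permutation_pvalue(observed_ranks, n_genes, trials=20000, seed=1):
    """Permutation approximation of the tail probability: draw ranks
    uniformly from 1..n_genes and count how often the random rank
    product is at most the observed one. The exact distribution (the
    subject of the paper) replaces this sampling step."""
    rng = random.Random(seed)
    obs = rank_product(observed_ranks)
    k = len(observed_ranks)
    hits = sum(
        rank_product([rng.randint(1, n_genes) for _ in range(k)]) <= obs
        for _ in range(trials))
    return hits / trials

# A gene ranked 1st, 2nd and 1st among 100 genes in three replicates.
p = permutation_pvalue([1, 2, 1], n_genes=100)
```

For small tail probabilities such as this one, the permutation estimate is dominated by sampling noise, which is exactly the imperfection the exact distribution removes.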

  9. Feynman quasi probability distribution for spin-(1/2), and its generalizations

    International Nuclear Information System (INIS)

    Colucci, M.

    1999-01-01

Feynman's paper Negative probability is examined; in it, after a discussion about the possibility of attributing a real physical meaning to quasi-probability distributions, he introduces a new kind of distribution for spin-1/2, with a possible method of generalization to systems with an arbitrary number of states. The principal aim of this article is to shed light upon the method of construction of these distributions, taking into consideration their application to some experiments, and discussing their positive and negative aspects.

  10. Influence of dose distribution homogeneity on the tumor control probability in heavy-ion radiotherapy

    International Nuclear Information System (INIS)

    Wen Xiaoqiong; Li Qiang; Zhou Guangming; Li Wenjian; Wei Zengquan

    2001-01-01

In order to estimate the influence of a non-uniform dose distribution on the clinical treatment result, the influence of dose distribution homogeneity on the tumor control probability was investigated. Based on the formula deduced previously for the survival fraction of cells irradiated by a non-uniform heavy-ion irradiation field and the theory of tumor control probability, the tumor control probability was calculated for a tumor model exposed to dose distributions of different homogeneity. The results show that the tumor control probability corresponding to the same total dose will decrease if the dose distribution homogeneity gets worse. In clinical treatment, the dose distribution homogeneity should be better than 95%
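The qualitative conclusion, same total dose but lower control probability as homogeneity worsens, can be reproduced with a toy Poisson TCP model; the linear cell-survival law and every parameter value below are illustrative assumptions, not the formula deduced in the paper.

```python
import math

def tcp(voxel_doses, clonogens_per_voxel=1e5, alpha=0.25):
    """Poisson tumor control probability over a voxelized dose map:
    TCP = prod_i exp(-N0 * SF(d_i)), with a simple linear survival
    model SF(d) = exp(-alpha * d). All parameters are illustrative."""
    p = 1.0
    for d in voxel_doses:
        surviving = clonogens_per_voxel * math.exp(-alpha * d)
        p *= math.exp(-surviving)
    return p

uniform = tcp([60.0] * 10)        # homogeneous dose distribution
uneven = tcp([54.0, 66.0] * 5)    # same mean dose, worse homogeneity
```

The cold voxels dominate: because survival is exponential in dose, the under-dosed voxels lose far more control probability than the over-dosed voxels gain.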

  11. Separating the contributions of variability and parameter uncertainty in probability distributions

    International Nuclear Information System (INIS)

    Sankararaman, S.; Mahadevan, S.

    2013-01-01

    This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data leads to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family of distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels; first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters
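The two contributions can be illustrated with a nested (double-loop) Monte Carlo based on the law of total variance, Var(X) = E[Var(X|θ)] + Var(E[X|θ]); the Gaussian variable with an uncertain mean below is a made-up example, and this sketch stands in for, rather than implements, the authors' variance-based global sensitivity method.

```python
import random

def decompose_variance(trials_outer=2000, trials_inner=200, seed=3):
    """Law-of-total-variance split of overall uncertainty:
    E[Var(X|mu)]  -> natural variability
    Var(E[X|mu])  -> distribution parameter uncertainty.
    The outer loop samples the uncertain parameter, the inner loop
    samples the variable itself given that parameter."""
    rng = random.Random(seed)
    cond_means, cond_vars = [], []
    for _ in range(trials_outer):
        mu = rng.gauss(10.0, 2.0)          # uncertain mean parameter
        xs = [rng.gauss(mu, 1.0) for _ in range(trials_inner)]
        m = sum(xs) / len(xs)
        cond_means.append(m)
        cond_vars.append(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    variability = sum(cond_vars) / len(cond_vars)
    grand = sum(cond_means) / len(cond_means)
    param_unc = sum((m - grand) ** 2
                    for m in cond_means) / (len(cond_means) - 1)
    return variability, param_unc

variability, param_unc = decompose_variance()
```

With these invented numbers the variability term should come out near 1.0 (the fixed within-distribution variance) and the parameter term near 4.0 (the variance of the uncertain mean).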

  12. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)

  13. A relative Hilbert-Mumford criterion

    DEFF Research Database (Denmark)

    Gulbrandsen, Martin G.; Halle, Lars Halvard; Hulek, Klaus

    2015-01-01

    We generalize the classical Hilbert-Mumford criteria for GIT (semi-)stability in terms of one parameter subgroups of a linearly reductive group G over a field k, to the relative situation of an equivariant, projective morphism X -> Spec A to a noetherian k-algebra A. We also extend the classical...

  14. Resonances, scattering theory and rigged Hilbert spaces

    International Nuclear Information System (INIS)

    Parravicini, G.; Gorini, V.; Sudarshan, E.C.G.

    1979-01-01

    The problem of decaying states and resonances is examined within the framework of scattering theory in a rigged Hilbert space formalism. The stationary free, in, and out eigenvectors of formal scattering theory, which have a rigorous setting in rigged Hilbert space, are considered to be analytic functions of the energy eigenvalue. The value of these analytic functions at any point of regularity, real or complex, is an eigenvector with eigenvalue equal to the position of the point. The poles of the eigenvector families give origin to other eigenvectors of the Hamiltonian; the singularities of the out eigenvector family are the same as those of the continued S matrix, so that resonances are seen as eigenvectors of the Hamiltonian with eigenvalue equal to their location in the complex energy plane. Cauchy theorem then provides for expansions in terms of complete sets of eigenvectors with complex eigenvalues of the Hamiltonian. Applying such expansions to the survival amplitude of a decaying state, one finds that resonances give discrete contributions with purely exponential time behavior; the background is of course present, but explicitly separated. The resolvent of the Hamiltonian, restricted to the nuclear space appearing in the rigged Hilbert space, can be continued across the absolutely continuous spectrum; the singularities of the continuation are the same as those of the out eigenvectors. The free, in and out eigenvectors with complex eigenvalues and those corresponding to resonances can be approximated by physical vectors in the Hilbert space, as plane waves can. The need for having some further physical information in addition to the specification of the total Hamiltonian is apparent in the proposed framework. The formalism is applied to the Lee-Friedrichs model. 48 references

  15. On an inequality for non-normal operators

    International Nuclear Information System (INIS)

    Duggal, B.P.

    1992-07-01

Starting from the inequality proved by Takayuki Furuta for a dominant operator A on a complex Hilbert space H, which extends to all operators such that the pure part of A has empty point spectrum, it is shown that if A is a contraction (on a separable complex Hilbert space) with simple eigenvalues and a C_0 completely non-unitary part, and if (1 - A*A)^{1/2} is of Hilbert-Schmidt class, then the said inequality holds for A. 8 refs

  16. Superthermal photon bunching in terms of simple probability distributions

    Science.gov (United States)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

    We analyze the second-order photon autocorrelation function g(2 ) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g(2 )(0 ) >2 ]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.
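The proposed model, a statistical mixture of a thermal and a lasing-like (Poissonian) photon number distribution, is easy to check numerically; the mean photon numbers and mixture weight below are arbitrary illustrative choices.

```python
import math

def thermal_pmf(nbar, nmax):
    """Photon number distribution of a thermal state with mean nbar."""
    r = nbar / (1.0 + nbar)
    return [r ** n / (1.0 + nbar) for n in range(nmax + 1)]

def poisson_pmf(nbar, nmax):
    """Photon number distribution of a coherent (lasing-like) state,
    built iteratively to avoid factorial overflow."""
    pmf = [math.exp(-nbar)]
    for n in range(1, nmax + 1):
        pmf.append(pmf[-1] * nbar / n)
    return pmf

def g2(pmf):
    """Second-order autocorrelation at zero delay:
    g2(0) = <n(n-1)> / <n>^2 computed from the number distribution."""
    mean = sum(n * p for n, p in enumerate(pmf))
    return sum(n * (n - 1) * p for n, p in enumerate(pmf)) / mean ** 2

NMAX = 400
therm = thermal_pmf(10.0, NMAX)    # bright thermal mode, g2 = 2
lasing = poisson_pmf(1.0, NMAX)    # weak lasing-like mode, g2 = 1
mix = [0.5 * a + 0.5 * b for a, b in zip(therm, lasing)]  # mixture
```

For these numbers the mixture gives g2(0) = (0.5·2·10² + 0.5·1²)/(0.5·10 + 0.5·1)² ≈ 3.3, i.e. superthermal bunching from two components that are not superthermal individually.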

  17. Wavelet Based Hilbert Transform with Digital Design and Application to QCM-SS Watermarking

    Directory of Open Access Journals (Sweden)

    S. P. Maity

    2008-04-01

Full Text Available In recent times, wavelet transforms have been used extensively for efficient storage, transmission and representation of multimedia signals. Hilbert transform pairs of wavelets are the basic unit of many wavelet theories such as complex filter banks, complex wavelets and phaselets. Moreover, the Hilbert transform finds various applications in communications and signal processing, such as generation of single sideband (SSB) modulation, quadrature carrier multiplexing (QCM) and bandpass representation of a signal. Thus wavelet-based discrete Hilbert transform design has drawn much attention from researchers in recent years. This paper proposes (i) an algorithm for generation of low-computation-cost Hilbert transform pairs of symmetric filter coefficients using biorthogonal wavelets, (ii) an approximation to a rational-coefficient form for efficient hardware realization without much loss in signal representation, and finally (iii) the development of a QCM-SS (spread spectrum) image watermarking scheme for doubling the payload capacity. Simulation results show the novelty of the proposed Hilbert transform design and its application to watermarking compared to existing algorithms.
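As a reference point for any discrete Hilbert transform design, the ideal frequency-domain construction can be written in a few lines; this is the generic textbook construction (with a slow O(n²) DFT for self-containedness), not the paper's wavelet-based filter.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def hilbert(x):
    """Ideal discrete Hilbert transform: multiply positive frequencies
    by -i and negative frequencies by +i in the DFT domain."""
    n = len(x)
    X = dft(x)
    for k in range(n):
        if 0 < k < n // 2:
            X[k] *= -1j
        elif k > n // 2:
            X[k] *= 1j
        else:  # DC (k == 0) and Nyquist (k == n//2) components are zeroed
            X[k] = 0.0
    return [v.real for v in idft(X)]

n = 64
cosine = [math.cos(2 * math.pi * 4 * t / n) for t in range(n)]
sine = hilbert(cosine)  # the Hilbert transform of cos is sin
```

Practical designs such as the filter pairs in the paper approximate this ideal -i·sgn(ω) response with finite, implementable filters.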

  18. Generalization of Poisson distribution for the case of changing probability of consequential events

    International Nuclear Information System (INIS)

    Kushnirenko, E.

    1995-01-01

A generalization of the Poisson distribution for the case of changing probabilities of consequential events is presented. It is shown that the classical Poisson distribution is the special case of this generalized distribution in which the probabilities of the consequential events are constant. Using the generalized Poisson distribution makes it possible in some cases to obtain an analytical result instead of performing a Monte Carlo calculation.
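One concrete way changing per-event probabilities generalize a constant-probability law is the success-count recursion below, which reduces to the binomial (and in the rare-event limit to the Poisson) when all probabilities are equal; it is a generic illustration in the same spirit, not the distribution derived in the paper.

```python
def count_distribution(probs):
    """Distribution of the number of successes in independent trials with
    per-trial probabilities `probs` (a Poisson-binomial distribution).
    Each pass folds one trial into the running distribution."""
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)      # trial fails: count unchanged
            new[k + 1] += q * p          # trial succeeds: count + 1
        dist = new
    return dist

changing = count_distribution([0.1, 0.2, 0.3])  # changing probabilities
constant = count_distribution([0.2] * 3)        # classical binomial case
```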

  19. Notes on Hilbert and Cauchy Matrices

    Czech Academy of Sciences Publication Activity Database

    Fiedler, Miroslav

    2010-01-01

Vol. 432, No. 1 (2010), pp. 351-356 ISSN 0024-3795 Institutional research plan: CEZ:AV0Z10300504 Keywords: Hilbert matrix * Cauchy matrix * combined matrix * AT-property Subject RIV: BA - General Mathematics Impact factor: 1.005, year: 2010

  20. Bearing fault detection utilizing group delay and the Hilbert-Huang transform

    International Nuclear Information System (INIS)

    Jin, Shuai; Lee, Sang-Kwon

    2017-01-01

Vibration signals measured from a mechanical system are useful to detect system faults. Signal processing has been used to extract fault information in bearing systems. However, a wide vibration signal frequency band often affects the ability to obtain the effective fault features. In addition, a few oscillation components are not useful at the entire frequency band in a vibration signal. By contrast, useful fatigue information can be embedded in the noise oscillation components. Thus, a method to estimate which frequency band contains fault information utilizing group delay was proposed in this paper. Group delay as a measure of phase distortion can indicate the phase structure relationship in the frequency domain between original (with noise) and denoising signals. We used the empirical mode decomposition of a Hilbert-Huang transform to sift the useful intrinsic mode functions based on the results of group delay after determining the valuable frequency band. Finally, envelope analysis and the energy distribution after the Hilbert transform were used to complete the fault diagnosis. The practical bearing fault data, which were divided into inner and outer race faults, were used to verify the efficiency and quality of the proposed method.

  1. Bearing fault detection utilizing group delay and the Hilbert-Huang transform

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuai; Lee, Sang-Kwon [Inha University, Incheon (Korea, Republic of)

    2017-03-15

    Vibration signals measured from a mechanical system are useful to detect system faults. Signal processing has been used to extract fault information in bearing systems. However, a wide vibration signal frequency band often affects the ability to obtain the effective fault features. In addition, a few oscillation components are not useful at the entire frequency band in a vibration signal. By contrast, useful fatigue information can be embedded in the noise oscillation components. Thus, a method to estimate which frequency band contains fault information utilizing group delay was proposed in this paper. Group delay as a measure of phase distortion can indicate the phase structure relationship in the frequency domain between original (with noise) and denoising signals. We used the empirical mode decomposition of a Hilbert-Huang transform to sift the useful intrinsic mode functions based on the results of group delay after determining the valuable frequency band. Finally, envelope analysis and the energy distribution after the Hilbert transform were used to complete the fault diagnosis. The practical bearing fault data, which were divided into inner and outer race faults, were used to verify the efficiency and quality of the proposed method.

  2. Family of probability distributions derived from maximal entropy principle with scale invariant restrictions.

    Science.gov (United States)

    Sonnino, Giorgio; Steinbrecher, György; Cardinali, Alessandro; Sonnino, Alberto; Tlidi, Mustapha

    2013-01-01

    Using statistical thermodynamics, we derive a general expression of the stationary probability distribution for thermodynamic systems driven out of equilibrium by several thermodynamic forces. The local equilibrium is defined by imposing the minimum entropy production and the maximum entropy principle under the scale invariance restrictions. The obtained probability distribution presents a singularity that has an immediate physical interpretation in terms of intermittency models. The derived reference probability distribution function is interpreted as a time and ensemble average of the real physical one. A generic family of stochastic processes describing noise-driven intermittency, where the stationary density distribution coincides exactly with the one resulting from entropy maximization, is presented.

  3. Study on probability distribution of fire scenarios in risk assessment to emergency evacuation

    International Nuclear Information System (INIS)

    Chu Guanquan; Wang Jinhui

    2012-01-01

    Event tree analysis (ETA) is a frequently used technique to analyze the probabilities of probable fire scenarios. The event probability is usually characterized by a definite value. It is not appropriate to use definite values, as these estimates may be the result of poor-quality statistics and limited knowledge. Without addressing uncertainties, ETA will give imprecise results, and the credibility of the risk assessment will be undermined. This paper presents an approach to address event probability uncertainties and analyze the probability distribution of a probable fire scenario. ETA is performed to construct probable fire scenarios. The activation time of every event is characterized as a stochastic variable by considering uncertainties in the fire growth rate and other input variables. To obtain the probability distribution of a probable fire scenario, a Markov chain is proposed in combination with ETA. To demonstrate the approach, a case study is presented.
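
    A minimal sketch of the underlying idea (not the paper's Markov-chain formulation): treat each branch probability of a small event tree as a random variable and propagate the uncertainty by Monte Carlo. The three events and all triangular-distribution parameters below are invented for illustration.

```python
import random

def scenario_prob_samples(n=10_000, seed=1):
    """Monte Carlo distribution of one fire-scenario probability when each
    branch probability of the event tree is itself uncertain (triangular)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        p_detect_fail = rng.triangular(0.05, 0.30, 0.10)    # low, high, mode
        p_suppress_fail = rng.triangular(0.02, 0.20, 0.05)
        p_door_open = rng.triangular(0.10, 0.50, 0.25)
        out.append(p_detect_fail * p_suppress_fail * p_door_open)
    return out

samples = scenario_prob_samples()
mean_prob = sum(samples) / len(samples)
```

    Instead of a single point estimate, the analyst now has a full distribution of the scenario probability from which percentiles can be read off.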

  4. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

    Huang Zhifu; Lin Bihong; Chen Jincan

    2009-01-01

    In order to overcome the limitations of the original expression of the probability distribution appearing in the literature of Incomplete Statistics, a new expression of the probability distribution is derived, where the Lagrange multiplier β introduced here is proved to be identical with that introduced in the second and third choices for the internal energy constraint in Tsallis' statistics and to be just equal to the physical inverse temperature. It is expounded that the probability distribution described by the new expression is invariant under a uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given and the relationship between the new and the original expressions of the probability distribution is discussed.

  5. Collective motions of globally coupled oscillators and some probability distributions on circle

    Energy Technology Data Exchange (ETDEWEB)

    Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)

    2017-06-28

    In 2010 Kato and Jones described a new family of probability distributions on the circle, obtained as Möbius transformations of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between Directional Statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on the cell cycle. • We apply Directional Statistics to study the collective dynamics of coupled oscillators.

  6. How Can Histograms Be Useful for Introducing Continuous Probability Distributions?

    Science.gov (United States)

    Derouet, Charlotte; Parzysz, Bernard

    2016-01-01

    The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…

  7. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    International Nuclear Information System (INIS)

    Loubenets, Elena R.

    2015-01-01

    We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  8. Calculation of ruin probabilities for a dense class of heavy tailed distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady

    2015-01-01

    In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate any such distribution. We prove that formulas from renewal theory, with particular attention to ruin probabilities, which are true for common phase-type distributions also hold true for the infinite-dimensional case. We provide algorithms for calculating functionals of interest such as the renewal density and the ruin probability. It might be of interest to approximate a given heavy tailed distribution of some other type by a distribution from the class of infinite-dimensional phase-type distributions, and to this end we provide a calibration procedure which works for the approximation…
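
    For orientation, a sketch of the classical light-tailed baseline that the phase-type machinery generalizes: the Cramér-Lundberg model with Poisson arrivals and exponential claims, where the ruin probability has a closed form. This is a textbook special case, not the paper's algorithm for the heavy tailed class.

```python
import math

def ruin_prob(u, lam, mu, c):
    """Cramer-Lundberg ruin probability for initial capital u, Poisson claim
    rate lam, Exp claims with mean mu, and premium rate c:
        psi(u) = rho * exp(-(1 - rho) * u / mu),  with rho = lam * mu / c < 1."""
    rho = lam * mu / c
    if rho >= 1.0:
        raise ValueError("net profit condition rho < 1 violated")
    return rho * math.exp(-(1.0 - rho) * u / mu)
```

    For heavy tailed claims no such exponential decay holds, which is exactly why the dense class of infinite-dimensional phase-type distributions is useful.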

  9. Differentiable absorption of Hilbert C*-modules, connections and lifts of unbounded operators

    DEFF Research Database (Denmark)

    Kaad, Jens

    2017-01-01

    … The differentiable absorption theorem is then applied to construct densely defined connections (or correspondences) on Hilbert C*-modules. These connections can in turn be used to define selfadjoint and regular "lifts" of unbounded operators which act on an auxiliary Hilbert C*-module.

  10. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period of 2000 to 2012. The study starts with summary statistics of the data, which provide a clue on the likely candidates for the best fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson Type III (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness of fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best fitted distributions to represent the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.
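
    A self-contained sketch of the method of L-moments for the two-parameter Generalized Pareto (GPA) case, with location fixed at zero. The "true" parameters below are invented and the data are synthetic, not the Malaysian share returns.

```python
import random

def sample_l_moments(data):
    """First two sample L-moments via Hosking's unbiased b-weight estimators."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    return b0, 2.0 * b1 - b0

def fit_gpa(data):
    """Method-of-L-moments fit of the 2-parameter Generalized Pareto (GPA)
    distribution with location 0:
        lambda1 = alpha/(1+k),  lambda2 = alpha/((1+k)(2+k)),
    so k = lambda1/lambda2 - 2 and alpha = lambda1*(1+k)."""
    l1, l2 = sample_l_moments(data)
    k = l1 / l2 - 2.0
    return k, l1 * (1.0 + k)

# Synthetic GPA data by inversion of the quantile function
# x(F) = alpha * (1 - (1 - F)**k) / k   (Hosking's parameterization, k > 0)
rng = random.Random(42)
k_true, a_true = 0.2, 1.0
data = [a_true * (1.0 - (1.0 - rng.random()) ** k_true) / k_true
        for _ in range(5000)]
k_hat, a_hat = fit_gpa(data)
```

    The same sample L-moments also feed the L-moment ratio diagram used for goodness-of-fit screening.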

  11. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, p_st) for stochastic uncertainty, a probability space (S_su, L_su, p_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, p_st) and (S_su, L_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  12. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, P_st) for stochastic uncertainty, a probability space (S_su, L_su, P_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, P_st) and (S_su, L_su, P_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  13. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
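
    Once the constraints are in hand, the final step of such a formalism is often a simple moment-matching computation. The sketch below derives beta shape parameters from assumed bounds, mean and variance (all four "constraints" are invented numbers; the paper's actual entropy maximization is not reproduced here).

```python
def beta_from_constraints(lo, hi, mean, var):
    """Beta shape parameters (a, b) on [lo, hi] matching the given mean and
    variance constraints (method of moments)."""
    m = (mean - lo) / (hi - lo)            # mean normalized to [0, 1]
    v = var / (hi - lo) ** 2               # variance normalized
    if not 0.0 < v < m * (1.0 - m):
        raise ValueError("variance incompatible with the bounds and mean")
    c = m * (1.0 - m) / v - 1.0            # c = a + b
    return m * c, (1.0 - m) * c

# Hypothetical constraints: support [0, 10], mean 4, variance 2
a, b = beta_from_constraints(0.0, 10.0, 4.0, 2.0)
```

    By construction, the resulting beta distribution reproduces the assumed mean and variance on the assumed support.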

  14. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distribution rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)
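
    A toy realization of the idea, with conditionals of my own choosing: a 1-D Ising-like chain whose spins are drawn sequentially from P(s[i+1] | s[i]), so each returned configuration is exactly independent of every other one, as the abstract emphasizes. For this choice the nearest-neighbour correlation is tanh(β).

```python
import math
import random

def sample_chain(n, beta, rng):
    """One spin configuration drawn spin-by-spin from the conditional
    P(s[i+1] = +1 | s[i]) = exp(beta * s[i]) / (2 * cosh(beta)).
    Each call returns a configuration independent of all previous ones."""
    s = [1 if rng.random() < 0.5 else -1]
    for _ in range(n - 1):
        p_up = math.exp(beta * s[-1]) / (2.0 * math.cosh(beta))
        s.append(1 if rng.random() < p_up else -1)
    return s

# Nearest-neighbour correlation should approach tanh(beta) for this chain.
rng = random.Random(7)
beta = 0.5
products = []
for _ in range(200):
    s = sample_chain(100, beta, rng)
    products.extend(s[i] * s[i + 1] for i in range(len(s) - 1))
corr = sum(products) / len(products)
```

    There is no equilibration or autocorrelation to worry about, which is the advantage the abstract highlights near a critical point.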

  15. THE EFFECT OF THE p-METHOXY GROUP ON THE CLAISEN-SCHMIDT CONDENSATION REACTION USING THE GRINDING METHOD

    Directory of Open Access Journals (Sweden)

    Karim Theresih

    2016-10-01

    This research aims to synthesize dibenzalacetone, 4-methoxychalcone and dianisalacetone through the Claisen-Schmidt condensation reaction with the grinding method, and to determine the effect of the p-methoxy group on the reaction. Dibenzalacetone was synthesized from benzaldehyde, acetone and NaOH. 4-Methoxychalcone was synthesized using 4-methoxybenzaldehyde, acetophenone and NaOH. Dianisalacetone was synthesized through the Claisen-Schmidt reaction between acetone and anisaldehyde with NaOH as the catalyst. The syntheses were performed by a solvent-free grinding method: the reactants and base catalyst were ground together in a mortar for 15 minutes to form a paste, which was then dried and recrystallized. The resulting compounds were characterized by TLC, FTIR and GC-MS. The FTIR and GC-MS analyses showed that dibenzalacetone, 4-methoxychalcone and dianisalacetone were obtained in yields of 59.93%, 86.21% and 70.39%, respectively, and that the p-methoxy group influences the Claisen-Schmidt condensation in the synthesis of these compounds. Keywords: dibenzalacetone, 4-methoxychalcone, dianisalacetone, grinding method

  16. A novel lobster-eye imaging system based on Schmidt-type objective for X-ray-backscattering inspection

    International Nuclear Information System (INIS)

    Xu, Jie; Wang, Xin; Zhan, Qi; Huang, Shengling; Chen, Yifan; Mu, Baozhong

    2016-01-01

    This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system were tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system based on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.

  17. Measurement of vibration mode shape by using Hilbert transform

    International Nuclear Information System (INIS)

    Kang, Min Sig

    2001-01-01

    This paper concerns modal analysis of mechanical structures by using a continuous scanning laser Doppler vibrometer. In modal analysis, the Hilbert transform based approach is superior to the Fourier transform based approach because of its fine accuracy and its flexible experimental settings. In this paper, the Hilbert transform based approach is extended to measure area mode shape data of a structure by simply modifying the scanning pattern to range over the entire surface of the structure. The effectiveness of the proposed method is illustrated with results of a numerical simulation for a rectangular plate
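
    A simplified stand-in for the demodulation step (synchronous lock-in detection rather than the paper's Hilbert-transform approach): it recovers the amplitude and phase of the response at the excitation frequency, which is the per-point information a scanning measurement assembles into a mode shape. The tone parameters are invented and an integer number of cycles in the record is assumed.

```python
import math

def demodulate(signal, f, fs):
    """Amplitude and phase of the component A*cos(2*pi*f*t/fs + phi) in a
    sampled record containing an integer number of excitation cycles."""
    n = len(signal)
    i_part = sum(s * math.cos(2.0 * math.pi * f * t / fs)
                 for t, s in enumerate(signal)) * 2.0 / n
    q_part = -sum(s * math.sin(2.0 * math.pi * f * t / fs)
                  for t, s in enumerate(signal)) * 2.0 / n
    return math.hypot(i_part, q_part), math.atan2(q_part, i_part)

# Hypothetical response at one scan point: 5 Hz excitation sampled at 100 Hz
fs, f, amp, phi = 100.0, 5.0, 1.5, 0.7
sig = [amp * math.cos(2.0 * math.pi * f * t / fs + phi) for t in range(200)]
a_est, phi_est = demodulate(sig, f, fs)
```

    Repeating this at every scan position yields the amplitude and phase maps from which the area mode shape is built.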

  18. Energy-dependent pole expansions for the effective potentials in the four-body integral equations with tensor forces

    International Nuclear Information System (INIS)

    Sofianos, S.; Fiedeldey, H.; Haberzettl, H.

    1980-01-01

    We investigate the accuracy of the energy-dependent pole expansion for the (3+1) and (2+2) subamplitudes in the calculation of the binding energy of the α particle, E_α, for separable NN potentials with tensor components. We employ the truncated t-matrix (t_00) approximation and compare our results for E_α to those obtained, independent of any separable expansion, by Gibson and Lehman, and to the results for E_α obtained with the Hilbert-Schmidt expansion of the subamplitudes. It is shown that the energy-dependent pole expansion is both more economical and converges faster than the Hilbert-Schmidt expansion, even one term of the energy-dependent pole approximation already being accurate to better than 1.5%

  19. Hilbert space methods in partial differential equations

    CERN Document Server

    Showalter, Ralph E

    1994-01-01

    This graduate-level text opens with an elementary presentation of Hilbert space theory sufficient for understanding the rest of the book. Additional topics include boundary value problems, evolution equations, optimization, and approximation. 1979 edition.

  20. Probability Distribution and Deviation Information Fusion Driven Support Vector Regression Model and Its Application

    Directory of Open Access Journals (Sweden)

    Changhao Fan

    2017-01-01

    In modeling, only information from the deviation between the output of the support vector regression (SVR) model and the training sample is considered, whereas other prior information about the training sample, such as probability distribution information, is ignored. Probability distribution information describes the overall distribution of sample data in a training sample that contains different degrees of noise and potential outliers, and it can help develop a high-accuracy model. To mine and use the probability distribution information of a training sample, a new support vector regression model that incorporates probability distribution information, weight SVR (PDISVR), is proposed. In the PDISVR model, the probability distribution of each sample is considered as its weight and is then introduced into the error coefficient and slack variables of SVR. Thus, both the deviation and the probability distribution information of the training sample are used in the PDISVR model to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with that of three SVR-based methods. The results showed that PDISVR performs better than the three other methods.
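
    The weighting idea can be sketched without an SVR at all: estimate the probability density at each training residual, use it as a sample weight (outliers sit in low-density regions and get small weights), and refit. The code below applies this to weighted linear least squares; it illustrates the weighting principle only, not the PDISVR formulation, and all data are synthetic.

```python
import math

def density_weights(res, h=1.0):
    """Gaussian-kernel density of each residual among all residuals; isolated
    residuals (likely outliers) receive low weight.  A crude stand-in for the
    probability-distribution-information weights of the abstract."""
    n = len(res)
    return [sum(math.exp(-0.5 * ((a - b) / h) ** 2) for b in res) / n
            for a in res]

def wfit(x, y, w):
    """Weighted least-squares line y = slope * x + intercept."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    slope = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return slope, my - slope * mx

x = list(range(10))
y = [2.0 * xi + 1.0 for xi in x]          # true line y = 2x + 1
y[5] += 20.0                              # one injected outlier
b_plain, a_plain = wfit(x, y, [1.0] * len(x))
res = [yi - (b_plain * xi + a_plain) for xi, yi in zip(x, y)]
b_wtd, a_wtd = wfit(x, y, density_weights(res))
```

    In PDISVR the analogous weights enter the error coefficient and slack variables of the SVR objective rather than a least-squares sum.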

  1. Electronographic calibration of UK 1.2-m Schmidt plates

    International Nuclear Information System (INIS)

    Hawkins, M.R.S.

    1979-01-01

    Two electronographic sequences are given in the South Galactic Pole region down to m_B ≈ 23 ± 0.3 mag. These sequences are used to obtain a calibration for COSMOS measures of UK 1.2-m Schmidt plates and evaluate their photometric transfer properties. (author)

  2. Typification of Zaluzianskya villosa F. W. Schmidt (Scrophulariaceae-Manuleae)

    Czech Academy of Sciences Publication Activity Database

    Kirschner, Jan

    2009-01-01

    Roč. 75, č. 3 (2009), s. 588-590 ISSN 0254-6299 R&D Projects: GA MŠk LC06073 Institutional research plan: CEZ:AV0Z60050516 Keywords : F. W. Schmidt * herbarium PRC * nomenclature Subject RIV: EF - Botanics Impact factor: 1.080, year: 2009

  3. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    … to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented…
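
    The rectangular case can be illustrated in a few lines: the same two uncertain inputs represented as intervals (possibilistic, worst-case) and as uniform random variables (probabilistic). The interval sum keeps the full worst-case width, while the probabilistic sum concentrates away from the extremes. All numbers are purely illustrative.

```python
import random

# Two uncertain inputs represented both ways:
# possibilistic: intervals [0, 1]; probabilistic: Uniform(0, 1) variables.
interval_x = (0.0, 1.0)
interval_y = (0.0, 1.0)
interval_sum = (interval_x[0] + interval_y[0], interval_x[1] + interval_y[1])

rng = random.Random(0)
samples = [rng.random() + rng.random() for _ in range(100_000)]
frac_near_bounds = sum(1 for s in samples if s < 0.1 or s > 1.9) / len(samples)
```

    The interval result is [0, 2] regardless of how likely the extremes are, whereas under the uniform model the sum is triangular and mass near the bounds is rare.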

  4. Probabilistic Q-function distributions in fermionic phase-space

    International Nuclear Information System (INIS)

    Rosales-Zárate, Laura E C; Drummond, P D

    2015-01-01

    We obtain a positive probability distribution or Q-function for an arbitrary fermionic many-body system. This is different to previous Q-function proposals, which were either restricted to a subspace of the overall Hilbert space, or used Grassmann methods that do not give probabilities. The fermionic Q-function obtained here is constructed using normally ordered Gaussian operators, which include both non-interacting thermal density matrices and BCS states. We prove that the Q-function exists for any density matrix, is real and positive, and has moments that correspond to Fermi operator moments. It is defined on a finite symmetric phase-space equivalent to the space of real, antisymmetric matrices. This has the natural SO(2M) symmetry expected for Majorana fermion operators. We show that there is a physical interpretation of the Q-function: it is the relative probability for observing a given Gaussian density matrix. The distribution has a uniform probability across the space at infinite temperature, while for pure states it has a maximum value on the phase-space boundary. The advantage of probabilistic representations is that they can be used for computational sampling without a sign problem. (fast track communication)

  5. WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS

    Directory of Open Access Journals (Sweden)

    Mr. Vladimir A. Smagin

    2016-12-01

    The Wiener–Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with the random variable method concentrated on the transversal axis of abscissas.

  6. Probability distribution of long-run indiscriminate felling of trees in ...

    African Journals Online (AJOL)

    The study was undertaken to determine the probability distribution of Long-run indiscriminate felling of trees in northern senatorial district of Adamawa State. Specifically, the study focused on examining the future direction of indiscriminate felling of trees as well as its equilibrium distribution. A multi-stage and simple random ...

  7. Theoretical derivation of wind power probability distribution function and applications

    International Nuclear Information System (INIS)

    Altunkaynak, Abdüsselam; Erdik, Tarkan; Dabanlı, İsmail; Şen, Zekai

    2012-01-01

    Highlights: ► Wind power stochastic characteristics, the standard deviation and the dimensionless skewness, are derived. ► Perturbation expressions for the wind power statistics are obtained from the Weibull probability distribution function (PDF) of wind speed. ► Comparisons are made with the corresponding characteristics of the wind speed PDF. ► The wind power abides by the Weibull PDF. -- Abstract: The instantaneous wind power contained in the air current is directly proportional to the cube of the wind speed. In practice, wind speeds are recorded in the form of a time series. It is, therefore, necessary to develop a formulation that takes into consideration the statistical parameters of such a time series. The purpose of this paper is to derive the general wind power formulation in terms of the statistical parameters by using the perturbation theory, which leads to a general formulation of the wind power expectation and other statistical parameter expressions such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied specifically for any wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived based on wind speed data. It is possible to determine wind power at any desired risk level; however, in practical studies most often 5% or 10% risk levels are preferred, and the necessary simple procedure is presented for this purpose in this paper.
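
    The cube relation makes the Weibull case concrete: for Weibull wind speeds, E[v^n] = c^n Γ(1 + n/k), so the mean specific power 0.5 ρ E[v^3] follows directly from the fitted parameters. The sketch below checks the closed form against Monte Carlo sampling; the air density and the Weibull parameters are assumed for the demo.

```python
import math
import random

def weibull_moment(n, k, c):
    """n-th raw moment of Weibull(shape k, scale c) wind speeds:
    E[v**n] = c**n * Gamma(1 + n/k)."""
    return c ** n * math.gamma(1.0 + n / k)

rho = 1.225                # air density, kg/m^3 (assumed)
k, c = 2.0, 8.0            # illustrative Weibull shape and scale
mean_power = 0.5 * rho * weibull_moment(3, k, c)   # mean power per m^2

# Monte Carlo cross-check by inverting the Weibull CDF: v = c*(-ln(1-U))**(1/k)
rng = random.Random(3)
mc = sum(0.5 * rho * (c * (-math.log(1.0 - rng.random())) ** (1.0 / k)) ** 3
         for _ in range(200_000)) / 200_000
```

    Higher moments give the standard deviation and skewness of the power in the same way, which is the route the perturbation formulation formalizes.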

  8. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    Science.gov (United States)

    Yudin, Alexey N.

    2011-11-01

    This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60 and 90 degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits from curved detector arrays in wide-field systems for the 8-14 um spectral range. Coupling of the photodetector with the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit cooled by hydrogen to improve bulk transmission.

  9. Flanged Bombardier beetles from Shanghai, China, with description of a new species in the genus Eustra Schmidt-Goebel (Coleoptera, Carabidae, Paussinae)

    Directory of Open Access Journals (Sweden)

    Xiao-Bin Song

    2018-02-01

    Four paussine species belonging to three different genera are discovered in Shanghai. A new species, Eustra shanghaiensis Song, sp. n., is described, illustrated, and distinguished from the treated congeners. New distributional data and biological notes on Eustra chinensis Bänninger, 1949, Itamus castaneus Schmidt-Goebel, 1846, and Platyrhopalus davidis Fairmaire, 1886 are provided.

  10. Quantum Hilbert matrices and orthogonal polynomials

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard; Berg, Christian

    2009-01-01

    Using the notion of quantum integers associated with a complex number q ≠ 0, we define the quantum Hilbert matrix and various extensions. They are Hankel matrices corresponding to certain little q-Jacobi polynomials when |q| < 1, and for the special value they are closely related to Hankel matrices…
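
    A small sketch of the objects involved: the quantum integer [n]_q = 1 + q + ... + q^(n-1), which reduces to the ordinary integer n at q = 1, and the classical Hilbert matrix H[i][j] = 1/(i+j+1), which is the q → 1 case. The entry formula of the quantum Hilbert matrix itself is not reproduced here; only the Hankel property is checked.

```python
from fractions import Fraction

def qint(n, q):
    """Quantum integer [n]_q = 1 + q + ... + q**(n - 1); equals n at q = 1."""
    return sum(q ** k for k in range(n))

def hilbert_matrix(n):
    """Classical Hilbert matrix H[i][j] = 1/(i + j + 1) (the q -> 1 case),
    a Hankel matrix: entries depend only on i + j."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

H = hilbert_matrix(4)
```

    Exact rational arithmetic via Fraction avoids the notorious ill-conditioning of Hilbert matrices in floating point.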

  11. Invariant Hilbert spaces of holomorphic functions

    NARCIS (Netherlands)

    Faraut, J; Thomas, EGF

    1999-01-01

    A Hilbert space of holomorphic functions on a complex manifold Z, which is invariant under a group G of holomorphic automorphisms of Z, can be decomposed into irreducible subspaces by using Choquet theory. We give a geometric condition on Z and G which implies that this decomposition is multiplicity free.

  12. Noise properties of Hilbert transform evaluation

    Czech Academy of Sciences Publication Activity Database

    Pavlíček, Pavel; Svak, V.

    2015-01-01

    Roč. 26, č. 8 (2015), s. 085207 ISSN 0957-0233 R&D Projects: GA ČR GA13-12301S Institutional support: RVO:68378271 Keywords: Hilbert transform * noise * measurement uncertainty * white-light interferometry * fringe-pattern analysis Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.492, year: 2015

  13. Terahertz bandwidth photonic Hilbert transformers based on synthesized planar Bragg grating fabrication.

    Science.gov (United States)

    Sima, Chaotan; Gates, J C; Holmes, C; Mennea, P L; Zervas, M N; Smith, P G R

    2013-09-01

    Terahertz bandwidth photonic Hilbert transformers are proposed and experimentally demonstrated. The integrated device is fabricated via a direct UV grating writing technique in a silica-on-silicon platform. The photonic Hilbert transformer operates at bandwidths of up to 2 THz (~16 nm) in the telecom band, a 10-fold greater bandwidth than any previously reported experimental approach. Achieving this performance requires detailed knowledge of the system transfer function of the direct UV grating writing technique; this allows improved linearity and yields terahertz bandwidth Bragg gratings with improved spectral quality. By incorporating a flat-top reflector and a Hilbert grating with a waveguide coupler, an ultrawideband all-optical single-sideband filter is demonstrated.

  14. Numerical Loading of a Maxwellian Probability Distribution Function

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    A renormalization procedure for the numerical loading of a Maxwellian probability distribution function (PDF) is formulated. The procedure, which involves the solution of three coupled nonlinear equations, yields a numerically loaded PDF with improved properties for higher velocity moments. This method is particularly useful for low-noise particle-in-cell simulations with electron dynamics
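
    The spirit of such a renormalization can be shown with a simpler correction than the paper's three coupled nonlinear equations: draw Gaussian velocities, then shift and rescale so that the sample mean and thermal spread match their targets exactly, which suppresses the corresponding moment noise. This is a generic quiet-start-style sketch, not the paper's procedure.

```python
import math
import random

def load_maxwellian(n, vth, seed=0):
    """Draw n velocities from a 1-D Maxwellian and rescale so the sample mean
    and thermal spread match their targets exactly (a simple moment
    correction in the spirit of the paper's renormalization procedure)."""
    rng = random.Random(seed)
    v = [rng.gauss(0.0, vth) for _ in range(n)]
    mean = sum(v) / n
    v = [x - mean for x in v]                    # enforce zero drift
    var = sum(x * x for x in v) / n
    scale = vth / math.sqrt(var)                 # enforce the exact temperature
    return [scale * x for x in v]

velocities = load_maxwellian(1000, 2.0)
```

    With the low-order moments pinned, statistical noise in a particle-in-cell simulation comes only from the higher moments of the loaded ensemble.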

  15. On the Values for the Turbulent Schmidt Number in Environmental Flows

    Directory of Open Access Journals (Sweden)

    Carlo Gualtieri

    2017-04-01

    Computational Fluid Dynamics (CFD) has consolidated as a tool to provide understanding and quantitative information regarding many complex environmental flows. The accuracy and reliability of CFD modelling results oftentimes come under scrutiny because of issues in the implementation of and input data for those simulations. Regarding the input data, if an approach based on the Reynolds-Averaged Navier-Stokes (RANS) equations is applied, the turbulent scalar fluxes are generally estimated by assuming the standard gradient diffusion hypothesis (SGDH), which requires the definition of the turbulent Schmidt number, Sct (the ratio of momentum diffusivity to mass diffusivity in the turbulent flow). However, no universally-accepted values of this parameter have been established or, more importantly, methodologies for its computation have been provided. This paper firstly presents a review of previous studies about Sct in environmental flows, involving both water and air systems. Secondly, three case studies are presented where the key role of a correct parameterization of the turbulent Schmidt number is pointed out. These include: (1) transverse mixing in a shallow water flow; (2) tracer transport in a contact tank; and (3) sediment transport in suspension. An overall picture on the use of the Schmidt number in CFD emerges from the paper.
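
    The SGDH closure is a one-line formula; the sketch below evaluates the modelled turbulent scalar flux under it, with assumed values for the eddy viscosity and the mean gradient, and 0.7 for Sct as a commonly used default.

```python
# Standard gradient diffusion hypothesis (SGDH):
#   <u'c'> = -(nu_t / Sc_t) * dC/dx
# i.e. the turbulent scalar flux is down-gradient, scaled by the eddy
# viscosity divided by the turbulent Schmidt number.
nu_t = 1.0e-3    # turbulent (eddy) viscosity, m^2/s -- assumed value
Sc_t = 0.7       # turbulent Schmidt number; a commonly used default
dC_dx = -2.0     # mean concentration gradient, kg/m^4 -- assumed value

turb_flux = -(nu_t / Sc_t) * dC_dx
```

    Because the flux scales with 1/Sc_t, halving Sct doubles the modelled turbulent mixing, which is why the parameterization matters so much in the case studies.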

  16. On the probability distribution of the stochastic saturation scale in QCD

    International Nuclear Information System (INIS)

    Marquet, C.; Soyez, G.; Xiao Bowen

    2006-01-01

    It was recently noticed that high-energy scattering processes in QCD have a stochastic nature. An event-by-event scattering amplitude is characterised by a saturation scale which is a random variable. The statistical ensemble of saturation scales formed with all the events is distributed according to a probability law whose cumulants have been recently computed. In this work, we obtain the probability distribution from the cumulants. We prove that it can be considered as Gaussian over a large domain that we specify and our results are confirmed by numerical simulations

  17. κ-Minkowski representations on Hilbert spaces

    International Nuclear Information System (INIS)

    Agostini, Alessandra

    2007-01-01

    The algebra of functions on κ-Minkowski noncommutative space-time is studied as algebra of operators on Hilbert spaces. The representations of this algebra are constructed and classified. This new approach leads to a natural construction of integration in κ-Minkowski space-time in terms of the usual trace of operators

  18. Hilbert space theory of classical electrodynamics

    Indian Academy of Sciences (India)

    Hilbert space; Koopman–von Neumann theory; classical electrodynamics. PACS No. 03.50. ... The paper is divided into four sections. Section 2 .... construction of Sudarshan is to be contrasted with that of Koopman and von Neumann. ..... ture from KvN and [16] in this formulation is to define new momentum and coordinate.

  19. Calculation of magnetization curves and probability distribution for monoclinic and uniaxial systems

    International Nuclear Information System (INIS)

    Sobh, Hala A.; Aly, Samy H.; Yehia, Sherif

    2013-01-01

We present the application of a simple classical statistical mechanics-based model to selected monoclinic and hexagonal model systems. In this model, we treat the magnetization as a classical vector whose angular orientation is dictated by the laws of equilibrium classical statistical mechanics. We calculate, for these anisotropic systems, the magnetization curves, energy landscapes and probability distributions for different sets of relevant parameters and magnetic fields of different strengths and directions. Our results demonstrate a correlation between the most probable orientation of the magnetization vector, the system's parameters, and the external magnetic field. -- Highlights: ► We calculate magnetization curves and the probability angular distribution of the magnetization. ► The magnetization curves are consistent with the probability results for the studied systems. ► Monoclinic and hexagonal systems behave differently due to their different anisotropies

  20. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  1. Semiclassical propagation: Hilbert space vs. Wigner representation

    Science.gov (United States)

    Gottwald, Fabian; Ivanov, Sergei D.

    2018-03-01

    A unified viewpoint on the van Vleck and Herman-Kluk propagators in Hilbert space and their recently developed counterparts in Wigner representation is presented. Based on this viewpoint, the Wigner Herman-Kluk propagator is conceptually the most general one. Nonetheless, the respective semiclassical expressions for expectation values in terms of the density matrix and the Wigner function are mathematically proven here to coincide. The only remaining difference is a mere technical flexibility of the Wigner version in choosing the Gaussians' width for the underlying coherent states beyond minimal uncertainty. This flexibility is investigated numerically on prototypical potentials and it turns out to provide neither qualitative nor quantitative improvements. Given the aforementioned generality, utilizing the Wigner representation for semiclassical propagation thus leads to the same performance as employing the respective most-developed (Hilbert-space) methods for the density matrix.

  2. Magnetomyographic recording and identification of uterine contractions using Hilbert-wavelet transforms

    International Nuclear Information System (INIS)

    Furdea, A; Wilson, J D; Eswaran, H; Lowery, C L; Govindan, R B; Preissl, H

    2009-01-01

We propose a multi-stage approach using wavelet and Hilbert transforms to identify uterine contraction bursts in magnetomyogram (MMG) signals measured using a 151-sensor magnetic array. In the first stage, we decompose the MMG signals by wavelet analysis into multilevel approximate and detail coefficients. In each level, the signals are reconstructed using the detail coefficients, followed by the computation of the Hilbert transform. The Hilbert amplitude of the reconstructed signals from different frequency bands (0.1–1 Hz) is summed over all the sensors to increase the signal-to-noise ratio. Using a novel clustering technique, affinity propagation, the contractile bursts are distinguished from the noise level. The method is applied to simulated MMG data, using a simple stochastic model to determine its robustness, and to seven MMG datasets
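The envelope step of such a pipeline (band-limited component → analytic signal → Hilbert amplitude) can be sketched with plain NumPy. The FFT construction below is the textbook analytic-signal recipe; the burst signal, rates and thresholds are invented stand-ins for MMG data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the standard FFT construction
    (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Toy stand-in for one MMG channel: a 0.4 Hz burst between 20 s and 40 s.
fs = 10.0
t = np.arange(0.0, 60.0, 1.0 / fs)
x = ((t > 20) & (t < 40)) * np.sin(2 * np.pi * 0.4 * t)
env = np.abs(analytic_signal(x))            # Hilbert amplitude (envelope)
print(env[int(30 * fs)] > 0.5, env[int(10 * fs)] < 0.1)
```

Thresholding (or, as in the paper, clustering) this envelope separates the burst interval from the quiescent background.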

  3. DNS of passive scalar transport in turbulent channel flow at high Schmidt numbers

    International Nuclear Information System (INIS)

    Schwertfirm, Florian; Manhart, Michael

    2007-01-01

We perform DNS of passive scalar transport in low Reynolds number turbulent channel flow at Schmidt numbers up to Sc = 49. The high resolutions required to resolve the scalar concentration fields at such Schmidt numbers are achieved by a hierarchical algorithm in which only the scalar fields are solved on the grid dictated by the Batchelor scale. The velocity fields are solved on coarser grids and prolonged by a conservative interpolation to the fine grid. The trends observed so far at lower Schmidt numbers Sc ≤ 10 are confirmed, i.e. the mean scalar gradient steepens at the wall with increasing Schmidt number, and the peaks of turbulent quantities increase and move towards the wall. The instantaneous scalar fields show a dramatic change: observable structures get longer and thinner, which is connected with the occurrence of steeper gradients, but the wall concentrations penetrate less deeply into the plateau in the core of the channel. Our data show that the thickness of the conductive sublayer, as defined by the intersection point of the linear with the logarithmic asymptote, scales with Sc^(-0.29). With this information it is possible to derive an expression for the dimensionless transfer coefficient K^+ which depends only on Sc and Re_τ. This expression is in full accordance with previous results, which demonstrates that the thickness of the conductive sublayer is the dominating quantity for the mean scalar profile.

  4. DNS of passive scalar transport in turbulent channel flow at high Schmidt numbers

    Energy Technology Data Exchange (ETDEWEB)

    Schwertfirm, Florian [Fachgebiet Hydromechanik, Technische Universitaet Muenchen, Arcisstr. 21, 80337 Muenchen (Germany); Manhart, Michael [Fachgebiet Hydromechanik, Technische Universitaet Muenchen, Arcisstr. 21, 80337 Muenchen (Germany)], E-mail: m.manhart@bv.tum.de

    2007-12-15

We perform DNS of passive scalar transport in low Reynolds number turbulent channel flow at Schmidt numbers up to Sc = 49. The high resolutions required to resolve the scalar concentration fields at such Schmidt numbers are achieved by a hierarchical algorithm in which only the scalar fields are solved on the grid dictated by the Batchelor scale. The velocity fields are solved on coarser grids and prolonged by a conservative interpolation to the fine grid. The trends observed so far at lower Schmidt numbers Sc ≤ 10 are confirmed, i.e. the mean scalar gradient steepens at the wall with increasing Schmidt number, and the peaks of turbulent quantities increase and move towards the wall. The instantaneous scalar fields show a dramatic change: observable structures get longer and thinner, which is connected with the occurrence of steeper gradients, but the wall concentrations penetrate less deeply into the plateau in the core of the channel. Our data show that the thickness of the conductive sublayer, as defined by the intersection point of the linear with the logarithmic asymptote, scales with Sc^(-0.29). With this information it is possible to derive an expression for the dimensionless transfer coefficient K^+ which depends only on Sc and Re_τ. This expression is in full accordance with previous results, which demonstrates that the thickness of the conductive sublayer is the dominating quantity for the mean scalar profile.

  5. Quantum Fourier transform, Heisenberg groups and quasi-probability distributions

    International Nuclear Information System (INIS)

    Patra, Manas K; Braunstein, Samuel L

    2011-01-01

    This paper aims to explore the inherent connection between Heisenberg groups, quantum Fourier transform (QFT) and (quasi-probability) distribution functions. Distribution functions for continuous and finite quantum systems are examined from three perspectives and all of them lead to Weyl-Gabor-Heisenberg groups. The QFT appears as the intertwining operator of two equivalent representations arising out of an automorphism of the group. Distribution functions correspond to certain distinguished sets in the group algebra. The marginal properties of a particular class of distribution functions (Wigner distributions) arise from a class of automorphisms of the group algebra of the Heisenberg group. We then study the reconstruction of the Wigner function from the marginal distributions via inverse Radon transform giving explicit formulae. We consider some applications of our approach to quantum information processing and quantum process tomography.

  6. Schmidt games and Markov partitions

    International Nuclear Information System (INIS)

    Tseng, Jimmy

    2009-01-01

Let T be a C^2-expanding self-map of a compact, connected, C^∞ Riemannian manifold M. We correct a minor gap in the proof of a theorem from the literature: the set of points whose forward orbits are nondense has full Hausdorff dimension. Our correction allows us to strengthen the theorem. Combining the correction with Schmidt games, we generalize the theorem in dimension one: given a point x_0 in M, the set of points whose forward orbit closures miss x_0 is a winning set. Finally, our key lemma, the no matching lemma, may be of independent interest in the theory of symbolic dynamics or the theory of Markov partitions

  7. New family of probability distributions with applications to Monte Carlo studies

    International Nuclear Information System (INIS)

    Johnson, M.E.; Tietjen, G.L.; Beckman, R.J.

    1980-01-01

    A new probability distribution is presented that offers considerable potential for providing stochastic inputs to Monte Carlo simulation studies. The distribution includes the exponential power family as a special case. An efficient computational strategy is proposed for random variate generation. An example for testing the hypothesis of unit variance illustrates the advantages of the proposed distribution

  8. Critical Assessment Of The Issues In The Application Of Hilbert Transform To Compute The Logarithmic Decrement

    Directory of Open Access Journals (Sweden)

    Majewski M.

    2015-06-01

Full Text Available The parametric OMI (Optimization in Multiple Intervals), the Yoshida-Magalas (YM) and a novel Hilbert-twin (H-twin) methods are advocated for computing the logarithmic decrement in the field of internal friction and mechanical spectroscopy of solids. It is shown that dispersion in experimental points results mainly from the selection of the computing method, the number of oscillations, and noise. It is demonstrated that the conventional Hilbert transform method suffers from high dispersion in internal friction values. It is unequivocally demonstrated that the Hilbert-twin method, which yields a ‘true envelope’ for exponentially damped harmonic oscillations, is superior to the conventional Hilbert transform method. The ‘true envelope’ of free decaying strain signals calculated from the Hilbert-twin method yields an excellent estimation of the logarithmic decrement in metals, alloys, and solids.
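For reference, the quantity all these methods estimate can be recovered from a clean synthetic free decay. The sketch below uses simple successive-peak ratios rather than the OMI, YM or Hilbert-twin algorithms; frequency, decrement and sampling rate are invented:

```python
import numpy as np

# For a free decay x(t) = exp(-lambda*t) * sin(2*pi*f*t), the logarithmic
# decrement is delta = ln(A_n / A_{n+1}), the log ratio of successive peak
# amplitudes (here lambda = delta * f). Illustration only.
f, delta = 2.0, 0.05                 # assumed frequency (Hz) and decrement
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.exp(-delta * f * t) * np.sin(2 * np.pi * f * t)

# local maxima: one positive peak per period
idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
A = x[idx]
delta_est = float(np.mean(np.log(A[:-1] / A[1:])))
print(delta_est)                      # close to the true value 0.05
```

On noisy experimental decays this naive peak-ratio estimate scatters badly, which is exactly the dispersion problem the envelope-based methods in the paper are designed to suppress.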

  9. Precipitation intensity probability distribution modelling for hydrological and construction design purposes

    International Nuclear Information System (INIS)

    Koshinchanov, Georgy; Dimitrov, Dobri

    2008-01-01

The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. These estimates are usually statistical estimates of the intensity of precipitation realized over a certain period of time (e.g. 5, 10 min, etc.) with a given return period (e.g. 20, 100 years, etc.). The traditional approach to evaluating the mentioned precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. Those estimates then become part of the state regulations to be used for various economic activities. Two problems occur with this approach: 1. Due to various factors the climate conditions have changed and the precipitation intensity estimates need regular updating; 2. As the extremes of the probability distribution are of particular importance for practice, the methodology of distribution fitting needs specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: - the method of the maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; - as above, but with separate modelling of the probability distribution for the middle and high probability quantiles; - a method similar to the first one, but with a threshold of 0.36 mm/min of intensity; - another method proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over some territory, improved and adapted by S. Gerasimov for Bulgaria; - the next method considers only

  10. Elements of Hilbert spaces and operator theory

    CERN Document Server

    Vasudeva, Harkrishan Lal

    2017-01-01

    The book presents an introduction to the geometry of Hilbert spaces and operator theory, targeting graduate and senior undergraduate students of mathematics. Major topics discussed in the book are inner product spaces, linear operators, spectral theory and special classes of operators, and Banach spaces. On vector spaces, the structure of inner product is imposed. After discussing geometry of Hilbert spaces, its applications to diverse branches of mathematics have been studied. Along the way are introduced orthogonal polynomials and their use in Fourier series and approximations. Spectrum of an operator is the key to the understanding of the operator. Properties of the spectrum of different classes of operators, such as normal operators, self-adjoint operators, unitaries, isometries and compact operators have been discussed. A large number of examples of operators, along with their spectrum and its splitting into point spectrum, continuous spectrum, residual spectrum, approximate point spectrum and compressio...

  11. Evaluation of the probability distribution of intake from a single measurement on a personal air sampler

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1988-01-01

An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^(-2/3) is used, where w is the expected number of particles collected. (author)
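A Monte Carlo sketch of the k-sum construction (the paper proceeds analytically; all parameter values below are invented for illustration):

```python
import numpy as np

# Intake over an interval = sum of k lognormal single-particle activities,
# with k Poisson-distributed (constant number concentration). Parameters
# are illustrative; the paper treats this analytically, not by simulation.
rng = np.random.default_rng(1)
mu, sigma = 0.0, 0.8     # lognormal parameters of single-particle activity
lam = 5.0                # expected number of particles collected, w
n = 20000                # number of Monte Carlo intake realizations
k = rng.poisson(lam, n)
intake = np.array([rng.lognormal(mu, sigma, ki).sum() for ki in k])
# mean of the k-sum is lam * exp(mu + sigma^2 / 2)
print(intake.mean())
```

Sampling the empirical distribution this way is a quick sanity check on any closed-form k-sum expression before it is fed into the Bayesian step.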

  12. Sexualität im Werk Arno Schmidts

    OpenAIRE

    Reischert, Jessica

    2006-01-01

Sexuality in Arno Schmidt's early work is a broad and complex subject that can nevertheless be reduced to certain basic patterns and processes. In the encounters between characters, clear lines emerge along which many conversations can be classified and analysed. Several types of conversation can be distinguished, each revealing characteristic behaviours of Schmidt's protagonists: in the mixed-gender conversation groups ...

  13. Vertical integration from the large Hilbert space

    Science.gov (United States)

    Erler, Theodore; Konopka, Sebastian

    2017-12-01

    We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.

  14. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete...... and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard...... rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...
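Log-concavity itself is easy to probe numerically: a density f is log-concave exactly when log f has non-positive second differences on a grid. A minimal check, contrasting a log-concave density (normal) with a non-log-concave one (Cauchy):

```python
import numpy as np

# A density f is log-concave iff log f is concave, i.e. its second
# difference is <= 0 everywhere. The normal density passes; the Cauchy
# density fails (its log has positive curvature for |x| > 1).
x = np.linspace(-5.0, 5.0, 1001)
log_normal_pdf = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_cauchy_pdf = -np.log(np.pi * (1 + x**2))
d2_normal = np.diff(log_normal_pdf, 2)
d2_cauchy = np.diff(log_cauchy_pdf, 2)
print(np.all(d2_normal <= 1e-12), np.all(d2_cauchy <= 1e-12))  # → True False
```

The paper's tests work instead with implications of log-concavity (increasing hazard rates, NBU), since with sample data the density itself is not observed.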

  15. Oscillatory integrals on Hilbert spaces and Schroedinger equation with magnetic fields

    International Nuclear Information System (INIS)

    Albeverio, S.; Brzezniak, Z.

    1994-01-01

We extend the theory of oscillatory integrals on Hilbert spaces (the mathematical version of ''Feynman path integrals'') to cover more general integrable functions, preserving the property that the integrals have convergent finite-dimensional approximations. We give an application to the representation of solutions of the time-dependent Schroedinger equation with a scalar and a magnetic potential by oscillatory integrals on Hilbert spaces. A relation with Ramer's functional in the corresponding probabilistic setting is found. (orig.)

  16. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

In this paper we study the volatility and its probability distribution function for cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  17. Effects of Schmidt number on near-wall turbulent mass transfer in pipe flow

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Chang Woo; Yang, Kyung Soo [Inha University, Incheon (Korea, Republic of)

    2014-12-15

Large Eddy simulation (LES) of turbulent mass transfer in circular-pipe flow has been performed to investigate the characteristics of turbulent mass transfer in the near-wall region. We consider a fully-developed turbulent pipe flow with a constant wall concentration. The Reynolds number under consideration is Re_r = 500 based on the friction velocity and the pipe radius, and the selected Schmidt numbers (Sc) are 0.71, 5, 10, 20 and 100. Dynamic subgrid-scale (SGS) models for the turbulent SGS stresses and turbulent mass fluxes were employed to close the governing equations. The current paper reports a comprehensive characterization of turbulent mass transfer in circular-pipe flow, focusing on its near-wall characteristics and Sc dependency. We start with mean fields by presenting mean velocity and concentration profiles, mean Sherwood numbers and mean mass transfer coefficients for the selected values of the parameters. After that, we present the characteristics of fluctuations including root-mean-square (rms) profiles of velocity, concentration, and mass transfer coefficient fluctuations. Turbulent mass fluxes and correlations between velocity and concentration fluctuations are also discussed. The near-wall behaviour of turbulent diffusivity and turbulent Schmidt number is shown, and other authors' correlations on their limiting behaviour towards the pipe wall are evaluated based on our LES results. The intermittent characteristics of turbulent mass transfer in pipe flow are depicted by probability density functions (pdf) of velocity and concentration fluctuations; joint pdfs between them are also presented. Instantaneous snapshots of velocity and concentration fluctuations are shown to supplement our discussion on the turbulence statistics. Finally, we report the results of octant analysis and budget calculation of concentration variance to clarify Sc-dependency of the correlation between near-wall turbulence structures and concentration fluctuation in

  18. Effects of Schmidt number on near-wall turbulent mass transfer in pipe flow

    International Nuclear Information System (INIS)

    Kang, Chang Woo; Yang, Kyung Soo

    2014-01-01

Large Eddy simulation (LES) of turbulent mass transfer in circular-pipe flow has been performed to investigate the characteristics of turbulent mass transfer in the near-wall region. We consider a fully-developed turbulent pipe flow with a constant wall concentration. The Reynolds number under consideration is Re_r = 500 based on the friction velocity and the pipe radius, and the selected Schmidt numbers (Sc) are 0.71, 5, 10, 20 and 100. Dynamic subgrid-scale (SGS) models for the turbulent SGS stresses and turbulent mass fluxes were employed to close the governing equations. The current paper reports a comprehensive characterization of turbulent mass transfer in circular-pipe flow, focusing on its near-wall characteristics and Sc dependency. We start with mean fields by presenting mean velocity and concentration profiles, mean Sherwood numbers and mean mass transfer coefficients for the selected values of the parameters. After that, we present the characteristics of fluctuations including root-mean-square (rms) profiles of velocity, concentration, and mass transfer coefficient fluctuations. Turbulent mass fluxes and correlations between velocity and concentration fluctuations are also discussed. The near-wall behaviour of turbulent diffusivity and turbulent Schmidt number is shown, and other authors' correlations on their limiting behaviour towards the pipe wall are evaluated based on our LES results. The intermittent characteristics of turbulent mass transfer in pipe flow are depicted by probability density functions (pdf) of velocity and concentration fluctuations; joint pdfs between them are also presented. Instantaneous snapshots of velocity and concentration fluctuations are shown to supplement our discussion on the turbulence statistics. Finally, we report the results of octant analysis and budget calculation of concentration variance to clarify Sc-dependency of the correlation between near-wall turbulence structures and concentration fluctuation in the

  19. Elements of a function analytic approach to probability.

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Roger Georges (University of Southern California, Los Angeles, CA); Red-Horse, John Robert

    2008-02-01

    We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.

  20. Hilbert's sixth problem: between the foundations of geometry and the axiomatization of physics

    Science.gov (United States)

    Corry, Leo

    2018-04-01

The sixth of Hilbert's famous 1900 list of 23 problems was a programmatic call for the axiomatization of the physical sciences. It was naturally and organically rooted at the core of Hilbert's conception of what axiomatization is all about. In fact, the axiomatic method which he applied at the turn of the twentieth century in his famous work on the foundations of geometry originated in a preoccupation with foundational questions related with empirical science in general. Indeed, far from a purely formal conception, Hilbert counted geometry among the sciences with strong empirical content, closely related to other branches of physics and deserving a treatment similar to that reserved for the latter. In this treatment, the axiomatization project was meant to play, in his view, a crucial role. Curiously, and contrary to a once-prevalent view, from all the problems in the list, the sixth is the only one that continually engaged Hilbert's efforts over a very long period of time, at least between 1894 and 1932. This article is part of the theme issue `Hilbert's sixth problem'.

  1. Hilbert's sixth problem: between the foundations of geometry and the axiomatization of physics.

    Science.gov (United States)

    Corry, Leo

    2018-04-28

The sixth of Hilbert's famous 1900 list of 23 problems was a programmatic call for the axiomatization of the physical sciences. It was naturally and organically rooted at the core of Hilbert's conception of what axiomatization is all about. In fact, the axiomatic method which he applied at the turn of the twentieth century in his famous work on the foundations of geometry originated in a preoccupation with foundational questions related with empirical science in general. Indeed, far from a purely formal conception, Hilbert counted geometry among the sciences with strong empirical content, closely related to other branches of physics and deserving a treatment similar to that reserved for the latter. In this treatment, the axiomatization project was meant to play, in his view, a crucial role. Curiously, and contrary to a once-prevalent view, from all the problems in the list, the sixth is the only one that continually engaged Hilbert's efforts over a very long period of time, at least between 1894 and 1932. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  2. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  3. Eigenfunction expansions and scattering theory in rigged Hilbert spaces

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Cubillo, F [Dpt. de Analisis Matematico, Universidad de Valladolid. Facultad de Ciencias, 47011 Valladolid (Spain)], E-mail: fgcubill@am.uva.es

    2008-08-15

    The work reviews some mathematical aspects of spectral properties, eigenfunction expansions and scattering theory in rigged Hilbert spaces, laying emphasis on Lippmann-Schwinger equations and Schroedinger operators.

  4. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions. An accurate determination of the probability distribution for maximum wind speed data is very important in estimating the extreme value. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely the Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic distributions, while eight plotting positions of Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by the statistical test criteria in frequency analysis; therefore, the best plotting position formula for selecting the appropriate probability model representing the MAWS data must be determined. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a duration of 39 years. The Weibull plotting position combined with the Normal distribution gave the highest fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE.
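Most of the plotting positions named above fit one template, p_i = (i − a)/(n + 1 − 2a), and the PPCC criterion is just the correlation between the sorted sample and the candidate distribution's quantiles at those positions. A sketch on synthetic data (the Gumbel example, its parameters and the sample are invented, not the paper's data):

```python
import numpy as np

# Plotting-position template p_i = (i - a) / (n + 1 - 2a):
# a = 0 gives Weibull, a = 0.44 Gringorten, a = 0.375 Blom, a = 0.40 Cunnane.
def plotting_positions(n, a=0.0):
    i = np.arange(1, n + 1)
    return (i - a) / (n + 1 - 2 * a)

# PPCC: correlation between the sorted data and the candidate distribution's
# quantiles at the plotting positions (Gumbel quantiles as an example).
def ppcc_gumbel(sample, a=0.44):
    x = np.sort(sample)
    p = plotting_positions(len(x), a)
    q = -np.log(-np.log(p))          # standard Gumbel quantile function
    return np.corrcoef(x, q)[0, 1]

rng = np.random.default_rng(0)
speeds = rng.gumbel(60.0, 10.0, 39)  # 39 years of synthetic annual maxima
print(ppcc_gumbel(speeds))           # close to 1 when the model fits
```

Repeating this over the candidate distributions and plotting-position constants, and keeping the combination with the highest PPCC (and lowest RMSE/RRMSE/MAE), mirrors the selection procedure the abstract describes.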

  5. Measurement of probability distributions for internal stresses in dislocated crystals

    Energy Technology Data Exchange (ETDEWEB)

    Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)

    2014-11-03

    Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis and based on the so-called “restricted second moment of the probability distribution” can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face centred cubic Cu deformed in uniaxial tension, a body centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.
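The "restricted second moment" is simple to form from a set of measured stress values. The following is a minimal numerical sketch with synthetic heavy-tailed data standing in for EBSD stress maps; it is not the authors' analysis pipeline:

```python
import random
random.seed(0)

def restricted_second_moment(stresses, sigma):
    """v2(sigma): mean of s^2 restricted to |s| < sigma. For a dislocated
    crystal, whose stress pdf falls off like 1/s^3 in the tails, v2 grows
    roughly logarithmically in sigma at large sigma, with a slope
    proportional to the total dislocation density (Groma's result)."""
    return sum(s * s for s in stresses if abs(s) < sigma) / len(stresses)

# synthetic heavy-tailed sample (ratio of Gaussians) mimicking long stress tails
sample = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(10000)]
v_lo = restricted_second_moment(sample, 1.0)
v_hi = restricted_second_moment(sample, 10.0)  # grows as the cutoff widens
```

Plotting `v2` against `log(sigma)` and reading off the asymptotic slope is the density-estimation step the abstract refers to.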

  6. Application of Hilbert-Huang Transform in Generating Spectrum-Compatible Earthquake Time Histories

    OpenAIRE

    Ni, Shun-Hao; Xie, Wei-Chau; Pandey, Mahesh

    2011-01-01

    Spectrum-compatible earthquake time histories have been widely used for seismic analysis and design. In this paper, a data processing method, Hilbert-Huang transform, is applied to generate earthquake time histories compatible with the target seismic design spectra based on multiple actual earthquake records. Each actual earthquake record is decomposed into several components of time-dependent amplitude and frequency by Hilbert-Huang transform. The spectrum-compatible earthquake time history ...

  7. Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence

    Directory of Open Access Journals (Sweden)

    C. C. Wu

    2011-04-01

    Full Text Available Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristic of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of the turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  8. Probability distributions for first neighbor distances between resonances that belong to two different families

    International Nuclear Information System (INIS)

    Difilippo, F.C.

    1994-01-01

    For a mixture of two families of resonances, we found the probability distribution for the distance, as first neighbors, between resonances that belong to different families. Integration of this distribution gives the probability of accidental overlapping of resonances of one isotope by resonances of the other, provided that the resonances of each isotope belong to a single family. (author)

  9. A numerical analysis of pollutant dispersion in street canyon: influence of the turbulent Schmidt number

    Directory of Open Access Journals (Sweden)

    Bouabdellah Abed

    2017-12-01

    Full Text Available Given the growing importance and availability of motor vehicles, the main source of pollution in street canyons is the dispersion of automobile exhaust gas, which has a substantial effect on micro-climate conditions in urban areas. Seven idealized 2D building configurations are investigated by numerical simulation. The turbulent Schmidt number is introduced in the pollutant transport equation to account for the ratio between the rate of turbulent momentum transport and the rate of turbulent mass transport by diffusion. In this paper, we attempt to reproduce the experimental test results by adjusting the value of the turbulent Schmidt number to the corresponding application; establishing this link achieved our objective, since the numerical results agree well with the experimental ones. The CFD code ANSYS CFX with the k-ε and RNG k-ε turbulence models was adopted for the computations. The simulation results show that the turbulent Schmidt number, varied over the range 0.1 to 1.3, has a marked effect on the prediction of pollutant dispersion in street canyons. For the flat-roof canyon configuration (case runa000), an appropriate turbulent Schmidt number of 0.6 is estimated using the k-ε model and of 0.5 using the RNG k-ε model.

  10. Diachronic changes in word probability distributions in daily press

    Directory of Open Access Journals (Sweden)

    Stanković Jelena

    2006-01-01

    Full Text Available Changes in the probability distributions of individual words and word types were investigated in two samples of the daily press spanning fifty years. One sample was derived from the Corpus of Serbian Language (CSL /Kostić, Đ., 2001/), which covers the period 1945-1957, and the other from the Ebart Media Documentation (EBR), compiled from seven daily newspapers and five weekly magazines from 2002 and 2003. Each sample consisted of about 1 million words. The results indicate that nouns and adjectives were more frequent in the CSL, while verbs and prepositions are more frequent in the EBR sample, suggesting a decrease in sentence length over the last five decades. Conspicuous changes in the probability distributions of individual words were observed for nouns and adjectives, while minimal or no changes were observed for verbs and prepositions. This outcome suggests that nouns and adjectives are most susceptible to diachronic change, while verbs and prepositions appear resistant to such change.

  11. Probability distributions in conservative energy exchange models of multiple interacting agents

    International Nuclear Information System (INIS)

    Scafetta, Nicola; West, Bruce J

    2007-01-01

    Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ in the rules that regulate the energy exchange and in boundary effects. We find a variety of stochastic behaviours: the interaction rules yield equilibrium energy probability distributions of different types, not only exponential distributions such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power-law distributions, mixed exponential and inverse power-law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena, including those found in complex chemical reactions

  12. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    Science.gov (United States)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran, so regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity; therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were determined. The data used in this research were recorded at 30 synoptic stations of Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the PPCC test and the L-moment method perform similarly for regional analysis of reference evapotranspiration and its effective parameters. The results also showed that the distribution type of the parameters which affect ET0 can affect the distribution of reference evapotranspiration.
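The L-moment ratios used for this kind of distribution selection (e.g. locating a sample on the L-moment ratio diagram relative to the Pearson type III curve) are computed from probability-weighted moments. A minimal sketch of the standard sample estimators, for illustration only:

```python
def sample_l_moments(data):
    """First three sample L-moments via probability-weighted moments b0, b1, b2
    (Hosking's unbiased estimators)."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i - 1) / (n - 1) * x[i - 1] for i in range(2, n + 1)) / n
    b2 = sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x[i - 1]
             for i in range(3, n + 1)) / n
    l1 = b0                      # L-location (mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3 / l2       # (mean, L-scale, L-skewness t3)

l1, l2, t3 = sample_l_moments([1, 2, 3, 4, 5])  # symmetric sample -> t3 = 0
```

For a real regional analysis, `t3` and the analogous L-kurtosis ratio from each station are compared against the theoretical curves of the candidate distributions.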

  13. The joint probability distribution of structure factors incorporating anomalous-scattering and isomorphous-replacement data

    International Nuclear Information System (INIS)

    Peschar, R.; Schenk, H.

    1991-01-01

    A method to derive joint probability distributions of structure factors is presented which incorporates anomalous-scattering and isomorphous-replacement data in a unified procedure. The structure factors F_H and F_-H, whose magnitudes differ due to anomalous scattering, are shown to be isomorphously related. This leads to a definition of isomorphism by means of which isomorphous-replacement and anomalous-scattering data can be handled simultaneously. The definition and calculation of the general term of the joint probability distribution for isomorphous structure factors turns out to be crucial. Its analytical form leads to an algorithm by means of which any particular joint probability distribution of structure factors can be constructed. The calculation of the general term is discussed for the case of four isomorphous structure factors in P1, assuming the atoms to be independently and uniformly distributed. A main result is the construction of the probability distribution of the 64 triplet phase sums present in space group P1 amongst four isomorphous structure factors F_H, four isomorphous F_K and four isomorphous F_-H-K. The procedure is readily generalized in the case where an arbitrary number of isomorphous structure factors are available for F_H, F_K and F_-H-K. (orig.)

  14. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error-detection codes are mechanisms that enable robust delivery of data over unreliable, error-prone communication channels and devices; they allow such errors to be detected. There are two classes of error-detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors but have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in contrast, have a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes: a detailed study of this parameter allows the behaviour of the code to be analyzed when errors are injected into the encoding device. The complexity of the encoding function also plays an important role: encoding functions with low computational complexity and a low masking probability give the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding-function complexity on the error masking probability distribution. It is shown that a more complex encoding function reduces the maximum of the error masking probability and changes its distribution; in particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results show that functions of greater complexity have smoothed maxima of the error masking probability, which significantly complicates analysis of the code by an attacker; as a result, with a complex encoding function the probability of successful algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking
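The error-masking probability itself can be stated concretely: for an additive error e, it is the fraction of codewords whose corruption by e lands back in the code (so the error goes undetected). A brute-force sketch for a tiny linear code; this toy example is mine, not the paper's construction:

```python
def masking_probability(code, error):
    """Q(e): fraction of codewords c for which c XOR e is again a codeword,
    i.e. the probability that error pattern e is masked."""
    hits = sum(1 for c in code if c ^ error in code)
    return hits / len(code)

# even-parity code of length 3 (a linear code), words as 3-bit integers
code = {w for w in range(8) if bin(w).count("1") % 2 == 0}

q_in  = masking_probability(code, 0b011)  # error is itself a codeword -> always masked
q_out = masking_probability(code, 0b001)  # error outside the code -> always detected
```

This all-or-nothing behaviour of linear codes under algebraic manipulation is exactly why security-oriented (robust, typically nonlinear) codes aim for a Q(e) that is uniformly small over all nonzero e.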

  15. Introduction to probability and measure theories

    International Nuclear Information System (INIS)

    Partasarati, K.

    1983-01-01

    Chapters on probability and measure theory are presented. Borel mappings of measure spaces into each other and into separable metric spaces are studied. The Kolmogorov theorem on the extension of probabilities is derived from the theorem on extending measures to projective limits of measure spaces. The theory of integration is developed, and measures on products of spaces are studied. The theory of conditional mathematical expectations is presented via projections in Hilbert space. In conclusion, the theory of weak convergence of measures, elements of the theory of characteristic functions, and the theory of invariant and quasi-invariant measures on groups and homogeneous spaces are given

  16. Investigating and improving student understanding of the probability distributions for measuring physical observables in quantum mechanics

    International Nuclear Information System (INIS)

    Marshman, Emily; Singh, Chandralekha

    2017-01-01

    A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations. (paper)
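The core concept this tutorial targets, the probability distribution of measurement outcomes, is the Born rule P(a_i) = |⟨a_i|ψ⟩|². A minimal numeric illustration for a hypothetical two-level system (my own example, not part of the QuILT):

```python
import math

# state |psi> = (3/5)|0> + (4/5)|1>, written in the computational basis
psi = (complex(3 / 5), complex(4 / 5))

# eigenvectors of the observable S_x expressed in the same basis: |+> and |->
plus  = (1 / math.sqrt(2), 1 / math.sqrt(2))
minus = (1 / math.sqrt(2), -1 / math.sqrt(2))

def prob(eigvec, state):
    """Born rule: P(a_i) = |<a_i|psi>|^2."""
    amp = sum(complex(e).conjugate() * complex(s) for e, s in zip(eigvec, state))
    return abs(amp) ** 2

p_plus, p_minus = prob(plus, psi), prob(minus, psi)  # probabilities sum to 1
```

Here the distribution over S_x outcomes is (0.98, 0.02), illustrating the change of representation (Dirac notation to components in a chosen basis) that the QuILT emphasizes.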

  17. Density operators in quantum mechanics

    International Nuclear Information System (INIS)

    Burzynski, A.

    1979-01-01

    A brief discussion and resume of density operator formalism in the way it occurs in modern physics (in quantum optics, quantum statistical physics, quantum theory of radiation) is presented. Particularly we emphasize the projection operator method, application of spectral theorems and superoperators formalism in operator Hilbert spaces (Hilbert-Schmidt type). The paper includes an appendix on direct sums and direct products of spaces and operators, and problems of reducibility for operator class by using the projection operators. (author)

  18. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
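The closing idea, that drawing a point of small probability density is itself unlikely under the hypothesized distribution, suggests a simple Monte Carlo tail estimate. This sketch illustrates that idea only; it is not the authors' exact test statistic:

```python
import random
import math
random.seed(0)

def density_pvalue(f, sampler, x_obs, n=20000):
    """Monte Carlo estimate of P( f(X) <= f(x_obs) ) for X ~ f: the chance,
    under the hypothesized density, of drawing a point whose density is at
    most that of the observation."""
    t = f(x_obs)
    return sum(1 for _ in range(n) if f(sampler()) <= t) / n

# hypothesized density: standard normal; observation far out in the tail
f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
p = density_pvalue(f, lambda: random.gauss(0, 1), x_obs=3.5)
```

A tiny `p` indicates the observation sits in a region the hypothesized density rarely visits, a discrepancy that a cumulative-distribution-based test such as Kolmogorov-Smirnov can smooth over.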

  19. Novel microwave photonic fractional hilbert transformer using a ring resonator-based optical all-pass filter

    NARCIS (Netherlands)

    Zhuang, L.; Khan, M.R.H.; Beeker, Willem; Beeker, W.P.; Leinse, Arne; Heideman, Rene; Roeloffzen, C.G.H.

    2012-01-01

    We propose and demonstrate a novel wideband microwave photonic fractional Hilbert transformer implemented using a ring resonator-based optical all-pass filter. The full programmability of the ring resonator allows variable and arbitrary fractional order of the Hilbert transformer. The performance

  20. Alternative structures and bi-Hamiltonian systems on a Hilbert space

    International Nuclear Information System (INIS)

    Marmo, G; Scolarici, G; Simoni, A; Ventriglia, F

    2005-01-01

    We discuss transformations generated by dynamical quantum systems which are bi-unitary, i.e. unitary with respect to a pair of Hermitian structures on an infinite-dimensional complex Hilbert space. We introduce the notion of Hermitian structures in generic relative position. We provide a few necessary and sufficient conditions for two Hermitian structures to be in generic relative position to better illustrate the relevance of this notion. The group of bi-unitary transformations is considered in both the generic and the non-generic case. Finally, we generalize the analysis to real Hilbert spaces and extend to infinite dimensions results already available in the framework of finite-dimensional linear bi-Hamiltonian systems

  1. Multimode Interference: Identifying Channels and Ridges in Quantum Probability Distributions

    OpenAIRE

    O'Connell, Ross C.; Loinaz, Will

    2004-01-01

    The multimode interference technique is a simple way to study the interference patterns found in many quantum probability distributions. We demonstrate that this analysis not only explains the existence of so-called "quantum carpets," but can explain the spatial distribution of channels and ridges in the carpets. With an understanding of the factors that govern these channels and ridges we have a limited ability to produce a particular pattern of channels and ridges by carefully choosing the ...

  2. Image decomposition model Shearlet-Hilbert-L2 with better performance for denoising in ESPI fringe patterns.

    Science.gov (United States)

    Xu, Wenjun; Tang, Chen; Su, Yonggang; Li, Biyuan; Lei, Zhenkun

    2018-02-01

    In this paper, we propose an image decomposition model, Shearlet-Hilbert-L2, with better performance for denoising in electronic speckle pattern interferometry (ESPI) fringe patterns. In our model, the low-density fringes, high-density fringes, and noise are, respectively, described by shearlet smoothness spaces, adaptive Hilbert space, and L2 space and processed individually. Because the shearlet transform has superior directional sensitivity, our proposed Shearlet-Hilbert-L2 model achieves commendable filtering results for various types of ESPI fringe patterns, including uniform density fringe patterns, moderately variable density fringe patterns, and greatly variable density fringe patterns. We evaluate the performance of our proposed Shearlet-Hilbert-L2 model via application to two computer-simulated and nine experimentally obtained ESPI fringe patterns with various densities and poor quality. Furthermore, we compare our proposed model with windowed Fourier filtering and coherence-enhancing diffusion, both of which are state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. We also compare our proposed model with the previous image decomposition model BL-Hilbert-L2.

  3. Comparision of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study, we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazard. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 with magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We conclude that the Weibull distribution is the most suitable of the distributions considered for this region.
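The Kolmogorov-Smirnov statistic used for such goodness-of-fit comparisons is straightforward to compute against any candidate CDF. Below is a generic sketch with hypothetical inter-event times and assumed Weibull parameters, not the study's catalogue or fitted values:

```python
import math

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov statistic D = sup |F_emp - F_model|, evaluated at
    the sample points (checking both sides of each empirical-CDF jump)."""
    x = sorted(sample)
    n = len(x)
    d = 0.0
    for i, v in enumerate(x, start=1):
        f = cdf(v)
        d = max(d, abs(i / n - f), abs((i - 1) / n - f))
    return d

# hypothetical inter-event times (years) vs. a Weibull model with assumed
# shape k and scale lam; real parameters would come from fitting the catalogue
weibull_cdf = lambda t, k=1.5, lam=10.0: 1 - math.exp(-((t / lam) ** k))
times = [2.1, 4.7, 6.3, 8.0, 9.5, 12.2, 15.8, 21.4]
D = ks_statistic(times, weibull_cdf)
```

The fitted distribution whose `D` falls below the K-S critical value (and is smallest among the candidates) is the one selected.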

  4. Ad Hoc Physical Hilbert Spaces in Quantum Mechanics

    Czech Academy of Sciences Publication Activity Database

    Fernandez, F. M.; Garcia, J.; Semorádová, Iveta; Znojil, Miloslav

    2015-01-01

    Roč. 54, č. 12 (2015), s. 4187-4203 ISSN 0020-7748 Institutional support: RVO:61389005 Keywords : quantum mechanics * physical Hilbert spaces * ad hoc inner product * singular potentials regularized * low lying energies Subject RIV: BE - Theoretical Physics Impact factor: 1.041, year: 2015

  5. On-line quantile regression in the RKHS (Reproducing Kernel Hilbert Space) for operational probabilistic forecasting of wind power

    International Nuclear Information System (INIS)

    Gallego-Castillo, Cristobal; Bessa, Ricardo; Cavalcante, Laura; Lopez-Garcia, Oscar

    2016-01-01

    Wind power probabilistic forecasts are being used as inputs in several decision-making problems, such as stochastic unit commitment, operating reserve setting and electricity market bidding. This work introduces a new on-line quantile regression model based on the Reproducing Kernel Hilbert Space (RKHS) framework. Its application to the field of wind power forecasting involves a discussion on the choice of the bias term of the quantile models, and the consideration of the operational framework in order to mimic real conditions. Benchmarking against linear and spline quantile regression models was performed for a real case study over an 18-month period. Model parameter selection was based on k-fold cross-validation. Results showed a noticeable improvement in terms of calibration, a key criterion for the wind power industry. Modest improvements in terms of Continuous Ranked Probability Score (CRPS) were also observed for prediction horizons between 6 and 20 h ahead. - Highlights: • New online quantile regression model based on the Reproducing Kernel Hilbert Space. • First application to operational probabilistic wind power forecasting. • Modest improvements of CRPS for prediction horizons between 6 and 20 h ahead. • Noticeable improvements in terms of Calibration due to online learning.
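Whatever the function class (linear, spline, or RKHS as here), quantile regression models are trained on the pinball (quantile) loss. A minimal sketch of that loss, for illustration only, not the paper's RKHS learner:

```python
def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss for quantile level tau in (0, 1):
    under-predictions are weighted tau, over-predictions (1 - tau)."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        e = y - q
        total += tau * e if e >= 0 else (tau - 1) * e
    return total / len(y_true)

# toy check: constant prediction 1.5 against three observations, median quantile
loss = pinball_loss([1.0, 2.0, 3.0], [1.5, 1.5, 1.5], tau=0.5)
```

Minimizing this loss over `y_pred` drawn from an RKHS, with an online (stochastic) update per incoming observation, is the essence of the model class the abstract describes.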

  6. Spinors in Hilbert Space

    Science.gov (United States)

    Plymen, Roger; Robinson, Paul

    1995-01-01

    Infinite-dimensional Clifford algebras and their Fock representations originated in the quantum mechanical study of electrons. In this book, the authors give a definitive account of the various Clifford algebras over a real Hilbert space and of their Fock representations. A careful consideration of the latter's transformation properties under Bogoliubov automorphisms leads to the restricted orthogonal group. From there, a study of inner Bogoliubov automorphisms enables the authors to construct infinite-dimensional spin groups. Apart from assuming a basic background in functional analysis and operator algebras, the presentation is self-contained with complete proofs, many of which offer a fresh perspective on the subject.

  7. Probability distribution relationships

    Directory of Open Access Journals (Sweden)

    Yousry Abdelkader

    2013-05-01

    Full Text Available In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks. The first concerns the continuous distributions and their relations; the second presents the discrete distributions; the third depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.

  8. The distribution function of a probability measure on a space with a fractal structure

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Granero, M.A.; Galvez-Rodriguez, J.F.

    2017-07-01

    In this work we show how to define a probability measure with the help of a fractal structure. One of the keys of this approach is to use the completion of the fractal structure. Then we use the theory of a cumulative distribution function on a Polish ultrametric space and describe it in this context. Finally, with the help of fractal structures, we prove that a function satisfying the properties of a cumulative distribution function on a Polish ultrametric space is a cumulative distribution function with respect to some probability measure on the space. (Author)

  9. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

    with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...

  10. Generalized noncommutative Hardy and Hardy-Hilbert type inequalities

    DEFF Research Database (Denmark)

    Hansen, Frank; Krulic, Kristina; Pecaric, Josip

    2010-01-01

    We extend and unify several Hardy type inequalities to functions whose values are positive semi-definite operators. In particular, our methods lead to the operator versions of Hardy-Hilbert's and Godunova's inequalities. While classical Hardy type inequalities hold for parameter values p > 1, it ...

  11. Hilbert's Grand Hotel with a series twist

    Science.gov (United States)

    Wijeratne, Chanakya; Mamolo, Ami; Zazkis, Rina

    2014-08-01

    This paper presents a new twist on a familiar paradox, linking seemingly disparate ideas under one roof. Hilbert's Grand Hotel, a paradox which addresses infinite set comparisons is adapted and extended to incorporate ideas from calculus - namely infinite series. We present and resolve several variations, and invite the reader to explore his or her own variations.

  12. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP

  13. The p-sphere and the geometric substratum of power-law probability distributions

    International Nuclear Information System (INIS)

    Vignat, C.; Plastino, A.

    2005-01-01

    Links between power-law probability distributions and marginal distributions of uniform laws on p-spheres in R^n show that a mathematical derivation of the Boltzmann-Gibbs distribution necessarily passes through power-law distributions. Results are also given that link the parameters p and n to the value of the non-extensivity parameter q that characterizes these power laws in the context of non-extensive statistics

  14. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slips", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and are triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in pore pressure produced by a decrease in suction when a humid front enters the soil, as a consequence of infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is no longer enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for estimating the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters; the stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf, after which exceedance probability and return period estimation is straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model
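The derived-distributions idea, pushing the rainfall randomness through a deterministic failure model to obtain the FOS distribution, can be illustrated by Monte Carlo with a toy safety-factor relation. All numbers and the FOS formula below are invented for illustration and are not the paper's physical model:

```python
import random
random.seed(1)

def fos(intensity, duration):
    """Hypothetical deterministic failure model: the safety factor decreases
    as the storm depth (intensity x duration) drives the wetting front deeper."""
    depth = intensity * duration           # storm depth, mm
    return 2.0 / (1.0 + 0.01 * depth)      # toy relation; FOS < 1 means failure

# rectangular-pulses Poisson model: exponential intensity and duration
mean_i, mean_d = 10.0, 5.0                 # mm/h and h (assumed values)
samples = [fos(random.expovariate(1 / mean_i), random.expovariate(1 / mean_d))
           for _ in range(50000)]
p_failure = sum(1 for f in samples if f < 1.0) / len(samples)
```

In the analytical version of the technique, the same push-forward is carried out in closed form on the exponential pdfs rather than by sampling, yielding the FOS pdf directly.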

  15. All-optical Hilbert transformer based on a single phase-shifted fiber Bragg grating: design and analysis.

    Science.gov (United States)

    Asghari, Mohammad H; Azaña, José

    2009-02-01

    A simple all-fiber design for implementing an all-optical temporal Hilbert transformer is proposed and numerically demonstrated. We show that an all-optical Hilbert transformer can be implemented using a uniform-period fiber Bragg grating (FBG) with a properly designed amplitude-only grating apodization profile incorporating a single pi phase shift in the middle of the grating length. All-optical Hilbert transformers capable of processing arbitrary optical waveforms with bandwidths up to a few hundred gigahertz can be implemented using feasible FBGs.

  16. Structure of Hilbert space operators

    CERN Document Server

    Jiang, Chunlan

    2006-01-01

    This book exposes the internal structure of non-self-adjoint operators acting on a complex separable infinite-dimensional Hilbert space by analyzing and studying the commutant of operators. A unique presentation of the theorem on Cowen-Douglas operators is given. The authors take the strongly irreducible operator as a basic model, and find complete similarity invariants of Cowen-Douglas operators by using K-theory, complex geometry and operator algebra tools.

  17. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
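
    The two-step method summarized above can be sketched in code; for brevity this is a 1D version with a moving-average filter standing in for the FFT-based spectral shaping of the paper: (1) color a white Gaussian sequence to impose a non-flat power spectrum, then (2) map it through the Gaussian CDF and the inverse CDF of the target marginal (here an exponential, chosen arbitrarily). Function names and parameters are illustrative assumptions, not from the paper.

```python
import math
import random

def colored_gaussian(n, width=5, seed=0):
    """White Gaussian noise passed through a moving-average filter,
    then standardized, so the marginal is ~ N(0, 1) but the spectrum
    is no longer flat (step 1 of the method)."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n + width)]
    colored = [sum(white[i:i + width]) / math.sqrt(width) for i in range(n)]
    mean = sum(colored) / n
    var = sum((v - mean) ** 2 for v in colored) / n
    return [(v - mean) / math.sqrt(var) for v in colored]

def to_exponential(g, scale=1.0):
    """Memoryless transform (step 2): u = Phi(g) is uniform, then the
    exponential inverse CDF gives the desired marginal while the rank
    correlation of the colored sequence is preserved."""
    out = []
    for v in g:
        u = 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))
        u = min(u, 1.0 - 1e-12)          # guard against log(0)
        out.append(-scale * math.log(1.0 - u))
    return out
```

    In the 2D case of the paper, step 1 is done by multiplying the FFT of a white-noise image by the square root of the desired power spectral density before inverse transforming; step 2 is unchanged.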

  18. Design and development of telescope control system and software for the 50/80 cm Schmidt telescope

    Science.gov (United States)

    Kumar, T. S.; Banavar, R. N.

    2012-09-01

    In this paper, we describe the details of the telescope controller design for the 50/80 cm Schmidt telescope at the Aryabhatta Research Institute of Observational Sciences (ARIES). The GUI-based software for commanding the telescope is developed in Visual C++. The hardware architecture features a distributed network of microcontrollers over a CAN bus. The basic functionality can also be implemented using the dedicated RS232 port on each board. The controller is able to perform with negligible rms velocity errors. At fine speeds, limit cycles are exhibited due to nonlinear friction. At speeds over 3.90 × 10⁻² rad/s, the PI controller performs with peak errors of less than 1%.

  19. Estimating probable flaw distributions in PWR steam generator tubes

    International Nuclear Information System (INIS)

    Gorman, J.A.; Turner, A.P.L.

    1997-01-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses

  20. Explicit solution of Riemann-Hilbert problems for the Ernst equation

    Science.gov (United States)

    Klein, C.; Richter, O.

    1998-01-01

    Riemann-Hilbert problems are an important solution technique for completely integrable differential equations. They are used to introduce a free function in the solutions which can be used at least in principle to solve initial or boundary value problems. But even if the initial or boundary data can be translated into a Riemann-Hilbert problem, it is in general impossible to obtain explicit solutions. In the case of the Ernst equation, however, this is possible for a large class because the matrix problem can be shown to be gauge equivalent to a scalar one on a hyperelliptic Riemann surface that can be solved in terms of theta functions. As an example we discuss the rigidly rotating dust disk.

  1. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

    The flanking residues seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first order statistics. An accuracy comparable to that of secondary…

  2. A measure of mutual divergence among a number of probability distributions

    Directory of Open Access Journals (Sweden)

    J. N. Kapur

    1987-01-01

    major inequalities due to Shannon, Renyi and Hölder. The inequalities are then used to obtain some useful results in information theory. In particular, measures are obtained of the mutual divergence among two or more probability distributions.

  3. Bound states and scattering in four-body systems

    International Nuclear Information System (INIS)

    Narodetsky, I.M.

    1979-01-01

    It is the purpose of this review to provide a clear and elementary introduction to the integral equation method and to demonstrate explicitly its usefulness for physical applications. The existing results concerning the application of the integral equation technique to four-nucleon bound states and scattering are reviewed. The treatment is based on the quasiparticle approach, which permits a simple interpretation of the equations in terms of quasiparticle scattering. The mathematical basis for the quasiparticle approach is the Hilbert-Schmidt theorem of Fredholm integral equation theory. This paper contains a detailed discussion of the Hilbert-Schmidt expansion as applied to the 2-particle amplitudes and to the 3 + 1 and 2 + 2 amplitudes which are the kernels of the four-body equations. The review contains essentially the discussion of the four-body quasiparticle equations and results obtained for bound states and scattering

  4. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
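
    The best-performing approach reported above, fitting a parametric CDF to the cumulative distribution of observed values by least squares, can be sketched as follows. The synthetic retention times, sampling intervals, the Weibull choice, and the coarse grid-search minimizer are all assumptions made for the illustration, not details from the study (which also compared maximum likelihood estimation and the gamma and lognormal families).

```python
import math
import random

def weibull_cdf(t, scale, shape):
    return 1.0 - math.exp(-((t / scale) ** shape))

def fit_cdf_least_squares(times, cum_props, scales, shapes):
    """Fit a Weibull CDF to cumulative observed proportions at the
    sampling times by (coarse grid-search) nonlinear least squares."""
    best, best_sse = None, float("inf")
    for sc in scales:
        for sh in shapes:
            sse = sum((weibull_cdf(t, sc, sh) - p) ** 2
                      for t, p in zip(times, cum_props))
            if sse < best_sse:
                best, best_sse = (sc, sh), sse
    return best

# synthetic interval-censored retention times (h), true Weibull(scale=8, shape=1.5)
rng = random.Random(42)
samples = [8.0 * (-math.log(1.0 - rng.random())) ** (1.0 / 1.5)
           for _ in range(500)]
times = [2.0 * k for k in range(1, 13)]   # sampling every 2 h up to 24 h
cum = [sum(s <= t for s in samples) / len(samples) for t in times]
scale, shape = fit_cdf_least_squares(times, cum,
                                     [6 + 0.1 * i for i in range(41)],
                                     [1.0 + 0.05 * i for i in range(31)])
```

    Fitting the cumulative proportions, rather than assigning each observation to the lower, mid or upper bound of its sampling interval, is what the study found to be the most accurate and robust choice.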

  5. Outage probability of distributed beamforming with co-channel interference

    KAUST Repository

    Yang, Liang

    2012-03-01

    In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.

  6. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  7. Der Mensch im Katastrophenuniversum. Zum Verhältnis von Historie, Naturgeschichte und Poetik im Frühwerk Arno Schmidts

    Directory of Open Access Journals (Sweden)

    Stepan Zbytovsky

    2012-01-01

    Among the representations of nature in Arno Schmidt's early texts, from the debut novel Leviathan to his radio features, scenes of diverse loci terribiles and destructive forces of nature take a prominent place. In several texts, natural processes and disasters are described as trigger or executor of the apocalypse. In analogy with the dual significance of the term 'natural catastrophe', which refers to both the extreme natural event itself and its impact on culture and civilisation, Schmidt linked scientific data with mythological and other cultural patterns of interpretation in these passages. Starting from the concept of nature as Leviathan, Schmidt's understanding of nature is examined, and shown to be one in which natural disasters are understood not as contingent accidents, but as defining moments of natural history. These are closely interwoven by Schmidt with culture and human history, and mirrored in his poetological programme. This article focuses on the connections between the three components, in the context of Germans coming to terms with the past and the discourse of cultural ecology (especially A. Goodbody, H. Zapf).

  8. Gas Hydrate Formation Probability Distributions: The Effect of Shear and Comparisons with Nucleation Theory.

    Science.gov (United States)

    May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M

    2018-03-13

    Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.

  9. An introduction of gauge field by the Lie-isotopic lifting of the Hilbert space

    International Nuclear Information System (INIS)

    Nishioka, M.

    1984-01-01

    The gauge field is introduced via the Lie-isotopic lifting of the Hilbert space. Our method differs from others' in that the commutator between the isotopic element and the generators of the Lie algebra does not vanish in our case, whereas it vanishes in the other approaches. Our method uses the Lie-isotopic lifting of the Hilbert space, while the others do not.

  10. Experimental Investigation of a Direct Methanol Fuel Cell with Hilbert Fractal Current Collectors

    Directory of Open Access Journals (Sweden)

    Jing-Yi Chang

    2014-01-01

    The Hilbert curve is a continuous type of fractal space-filling curve. This fractal curve visits every point in a square grid with a size of 2×2, 4×4, or any other power of two. This paper presents a Hilbert fractal curve application to direct methanol fuel cell (DMFC) current collectors. The current collectors are carved following first-, second-, and third-order Hilbert fractal curves. These curves give the current collectors different free open ratios and opening perimeters. We conducted an experimental investigation into DMFC performance as a function of the free open ratio and opening perimeter of the bipolar plates. Nyquist plots of the bipolar plates are made and compared using electrochemical impedance spectroscopy (EIS) experiments to understand the phenomena in depth. The results obtained in this paper could be a good reference for future current collector design.
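
    The collector layouts described above are carved along Hilbert curves, which can be generated with the standard index-to-coordinate mapping for a 2^k × 2^k grid (this is the well-known bitwise algorithm, not code from the paper; the function name and grid sizes are illustrative):

```python
def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n×n grid,
    where n is a power of two (n = 2 gives the first-order curve,
    n = 4 the second order, n = 8 the third order)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate/flip the quadrant appropriately
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# the third-order curve used for one of the collectors visits all 64
# cells of an 8×8 grid, each step moving to an adjacent cell
path = [d2xy(8, d) for d in range(64)]
```

    Tracing `path` and milling along it yields the third-order collector pattern; lower orders follow from `n = 2` and `n = 4`.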

  11. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  12. Means of Hilbert space operators

    CERN Document Server

    Hiai, Fumio

    2003-01-01

    The monograph is devoted to a systematic study of means of Hilbert space operators by a unified method based on the theory of double integral transformations and Peller's characterization of Schur multipliers. General properties on means of operators such as comparison results, norm estimates and convergence criteria are established. After some general theory, special investigations are focused on three one-parameter families of A-L-G (arithmetic-logarithmic-geometric) interpolation means, Heinz-type means and binomial means. In particular, norm continuity in the parameter is examined for such means. Some necessary technical results are collected as appendices.

  13. Extended Schmidt law holds for faint dwarf irregular galaxies

    Science.gov (United States)

    Roychowdhury, Sambit; Chengalur, Jayaram N.; Shi, Yong

    2017-12-01

    Context. The extended Schmidt law (ESL) is a variant of the Schmidt law, which relates the surface densities of gas and star formation, with the surface density of stellar mass added as an extra parameter. Although the ESL has been shown to be valid for a wide range of galaxy properties, its validity in low-metallicity galaxies has not been comprehensively tested. This is important because metallicity affects the crucial atomic-to-molecular transition step in the process of conversion of gas to stars. Aims: We empirically investigate for the first time whether low-metallicity faint dwarf irregular galaxies (dIrrs) from the local universe follow the ESL. Here we consider the "global" law where surface densities are averaged over the galactic discs. dIrrs are unique not only because they are at the lowest end of mass and star formation scales for galaxies, but also because they are metal-poor compared to the general population of galaxies. Methods: Our sample is drawn from the Faint Irregular Galaxy GMRT Survey (FIGGS), which is the largest survey of atomic hydrogen in such galaxies. The gas surface densities are determined using their atomic hydrogen content. The star formation rates are calculated using GALEX far-ultraviolet fluxes after correcting for dust extinction, whereas the stellar surface densities are calculated using Spitzer 3.6 μm fluxes. The surface densities are calculated over the stellar discs defined by the 3.6 μm images. Results: We find dIrrs indeed follow the ESL. The mean deviation of the FIGGS galaxies from the relation is 0.01 dex, with a scatter around the relation of less than half that seen in the original relation. In comparison, we also show that the FIGGS galaxies are much more deviant when compared to the "canonical" Kennicutt-Schmidt relation. Conclusions: Our results help strengthen the universality of the ESL, especially for galaxies with low metallicities. We suggest that models of star formation in which feedback from previous generations

  14. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.

  15. Geometry of Gaussian quantum states

    International Nuclear Information System (INIS)

    Link, Valentin; Strunz, Walter T

    2015-01-01

    We study the Hilbert–Schmidt measure on the manifold of mixed Gaussian states in multi-mode continuous variable quantum systems. An analytical expression for the Hilbert–Schmidt volume element is derived. Its corresponding probability measure can be used to study typical properties of Gaussian states. It turns out that although the manifold of Gaussian states is unbounded, an ensemble of Gaussian states distributed according to this measure still has a normalizable distribution of symplectic eigenvalues, from which unitarily invariant properties can be obtained. By contrast, we find that for an ensemble of one-mode Gaussian states based on the Bures measure the corresponding distribution cannot be normalized. As important applications, we determine the distribution and the mean value of von Neumann entropy and purity for the Hilbert–Schmidt measure. (paper)

  16. Strong Completeness for Markovian Logics

    DEFF Research Database (Denmark)

    Kozen, Dexter; Mardare, Radu Iulian; Panangaden, Prakash

    2013-01-01

    In this paper we present Hilbert-style axiomatizations for three logics for reasoning about continuous-space Markov processes (MPs): (i) a logic for MPs defined for probability distributions on measurable state spaces, (ii) a logic for MPs defined for sub-probability distributions and (iii) a log...

  17. T^{\\sigma}_{\\rho}(G) Theories and Their Hilbert Series

    CERN Document Server

    Cremonesi, Stefano; Mekareeya, Noppadol; Zaffaroni, Alberto

    2015-01-01

    We give an explicit formula for the Higgs and Coulomb branch Hilbert series for the class of 3d N=4 superconformal gauge theories T^{\\sigma}_{\\rho}(G) corresponding to a set of D3 branes ending on NS5 and D5-branes, with or without O3 planes. Here G is a classical group, \\sigma is a partition of G and \\rho a partition of the dual group G^\\vee. In deriving such a formula we make use of the recently discovered formula for the Hilbert series of the quantum Coulomb branch of N=4 superconformal theories. The result can be expressed in terms of a generalization of a class of symmetric functions, the Hall-Littlewood polynomials, and can be interpreted in mathematical language in terms of localization. We mainly consider the case G=SU(N) but some interesting results are also given for orthogonal and symplectic groups.

  18. Introduction to Hilbert space and the theory of spectral multiplicity

    CERN Document Server

    Halmos, Paul R

    2017-01-01

    Concise introductory treatment consists of three chapters: The Geometry of Hilbert Space, The Algebra of Operators, and The Analysis of Spectral Measures. A background in measure theory is the sole prerequisite. 1957 edition.

  19. Probabilities of filaments in a Poissonian distribution of points -I

    International Nuclear Information System (INIS)

    Betancort-Rijo, J.

    1989-01-01

    Statistical techniques are devised to assess the likelihood of a Poisson sample of points in two and three dimensions containing specific filamentary structures. For that purpose, the expression of Otto et al. (1986, Astrophys. J., 304) for the probability density of clumps in a Poissonian distribution of points is generalized for any value of the density contrast. A way of counting filaments differing from that of Otto et al. is proposed, because at low density contrast the filaments counted by Otto et al. are distributed in a clumpy fashion, each clump of filaments corresponding to a distinct observed filament. (author)

  20. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional. © 1991.
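
    The triangular-distribution computations described above follow from closed-form expressions for the mean, variance and inverse CDF of a triangular distribution; a minimal sketch (the fractile convention follows the paper, where F95 is the value exceeded with probability 0.95, and only the complete-independence aggregation case is shown):

```python
import math

def triangular_stats(a, m, b):
    """Mean, standard deviation and selected fractiles of a triangular
    distribution with minimum a, mode m, maximum b."""
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0

    def fractile(p):
        # Fp = value exceeded with probability p, i.e. inverse CDF at 1 - p
        q = 1.0 - p
        fm = (m - a) / (b - a)        # CDF value at the mode
        if q <= fm:
            return a + math.sqrt(q * (b - a) * (m - a))
        return b - math.sqrt((1.0 - q) * (b - a) * (b - m))

    fr = {f"F{round(100 * p)}": fractile(p)
          for p in (0.95, 0.75, 0.5, 0.25, 0.05)}
    return mean, math.sqrt(var), fr

def aggregate_independent(components):
    """Aggregate component (mean, sd) pairs assuming complete
    independence: means add, variances add."""
    mean = sum(mu for mu, _ in components)
    sd = math.sqrt(sum(s * s for _, s in components))
    return mean, sd
```

    Under perfect positive correlation the standard deviations (and fractiles) add directly instead; intermediate dependence interpolates between the two polar cases.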

  1. Velocity-Resolved LES (VR-LES) technique for simulating turbulent transport of high Schmidt number passive scalars

    Science.gov (United States)

    Verma, Siddhartha; Blanquart, Guillaume; P. K. Yeung Collaboration

    2011-11-01

    Accurate simulation of high Schmidt number scalar transport in turbulent flows is essential to studying pollutant dispersion, weather, and several oceanic phenomena. Batchelor's theory governs scalar transport in such flows, but requires further validation at high Schmidt and high Reynolds numbers. To this end, we use a new approach with the velocity field fully resolved, but the scalar field only partially resolved. The grid used is fine enough to resolve scales up to the viscous-convective subrange where the decaying slope of the scalar spectrum becomes constant. This places the cutoff wavenumber between the Kolmogorov scale and the Batchelor scale. The subgrid scale terms, which affect transport at the supergrid scales, are modeled under the assumption that velocity fluctuations are negligible beyond this cutoff wavenumber. To ascertain the validity of this technique, we performed a priori testing on existing DNS data. This Velocity-Resolved LES (VR-LES) technique significantly reduces the computational cost of turbulent simulations of high Schmidt number scalars, and yet provides valuable information on the scalar spectrum in the viscous-convective subrange.

  2. Idealized models of the joint probability distribution of wind speeds

    Science.gov (United States)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
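
    The Weibull construction summarized above can be sketched in the univariate case: a speed built from isotropic, mean-zero Gaussian velocity components is Rayleigh distributed (a Weibull with shape 2), and a power-law transform of that speed yields a Weibull with any desired shape k. The function names, the choice of component variance, and the shape value are assumptions made for this sketch, not parameters from the study.

```python
import math
import random

def weibull_speeds(n, k, seed=7):
    """Speeds from isotropic mean-zero Gaussian components, power-law
    transformed so the marginal is Weibull(shape=k, scale=1)."""
    rng = random.Random(7 if seed is None else seed)
    sigma = 1.0 / math.sqrt(2.0)     # makes the transformed scale equal 1
    out = []
    for _ in range(n):
        u = rng.gauss(0.0, sigma)    # zonal wind component
        v = rng.gauss(0.0, sigma)    # meridional wind component
        s = math.hypot(u, v)         # speed: Rayleigh distributed
        out.append(s ** (2.0 / k))   # power transform -> Weibull(shape=k)
    return out

def weibull_cdf(w, k):
    """Weibull CDF with unit scale, for comparison with the samples."""
    return 1.0 - math.exp(-(w ** k))
```

    The bivariate case of the paper applies the same transform to speeds at two locations whose underlying Gaussian components are correlated, which induces the dependence structure of the bivariate Weibull distribution.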

  3. How were the Hilbert-Einstein equations discovered?

    International Nuclear Information System (INIS)

    Logunov, Anatolii A; Mestvirishvili, Mirian A; Petrov, Vladimir A

    2004-01-01

    The ways in which Albert Einstein and David Hilbert independently arrived at the gravitational field equations are traced. A critical analysis is presented of a number of papers in which the history of the derivation of the equations is viewed in a way that 'radically differs from the standard point of view'. The conclusions of these papers are shown to be totally unfounded. (from the history of physics)

  4. The Einstein-Hilbert gravitation with minimum length

    Science.gov (United States)

    Louzada, H. L. C.

    2018-05-01

    We study the Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to the minimum length, with the intention to find and estimate the corrections in this theory, clarifying whether or not it is possible to obtain, by means of the minimum length, a theory, in D=4, which is causal, unitary and provides a massive graviton. Therefore, we will calculate and analyze the dispersion relationships of the considered theory.

  5. Hilbert Series and Mixed Branches of T[SU(N)] theories

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Federico [Departamento de Física Teórica and Instituto de Física Teórica UAM-CSIC,Universidad Autónoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Hayashi, Hirotaka [Departamento de Física Teórica and Instituto de Física Teórica UAM-CSIC,Universidad Autónoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Tokai University,4-1-1 Kitakaname, Hiratsuka, Kanagawa 259-1292 (Japan)

    2017-02-07

    We consider mixed branches of the 3d N=4 T[SU(N)] theory. We compute the Hilbert series of the Coulomb branch part of the mixed branch from a restriction rule acting on the Hilbert series of the full Coulomb branch, which truncates the magnetic charge summation to the subset of BPS dressed monopole operators that arise in the Coulomb branch sublocus from which the mixed branch stems. This restriction can be understood directly from the type IIB brane picture through a relation between the magnetic charges of the monopoles and brane position moduli. We also apply the restriction rule to the Higgs branch part of a given mixed branch by exploiting 3d mirror symmetry. Both cases show complete agreement with results calculated by different methods.

  6. Tensor fields on orbits of quantum states and applications

    Energy Technology Data Exchange (ETDEWEB)

    Volkert, Georg Friedrich

    2010-07-19

    On classical Lie groups, which act by means of a unitary representation on finite dimensional Hilbert spaces H, we identify two classes of tensor field constructions. First, as pull-back tensor fields of order two from modified Hermitian tensor fields, constructed on Hilbert spaces by means of the property of having the vertical distributions of the C_0-principal bundle H_0 → P(H) over the projective Hilbert space P(H) in the kernel. And second, directly constructed on the Lie group, as left-invariant representation-dependent operator-valued tensor fields (LIROVTs) of arbitrary order being evaluated on a quantum state. Within the NP-hard problem of deciding whether a given state in an n-level bi-partite quantum system is entangled or separable (Gurvits, 2003), we show that both tensor field constructions admit a geometric approach to this problem, which evades the traditional ambiguity of defining metrical structures on the convex set of mixed states. In particular, by considering manifolds associated to orbits passing through a selected state when acted upon by the local unitary group U(n) x U(n) of Schmidt coefficient decomposition inducing transformations, we find the following results: In the case of pure states we show that Schmidt-equivalence classes which are Lagrangian submanifolds define maximally entangled states. This implies a stronger statement than the one proposed by Bengtsson (2007). Moreover, Riemannian pull-back tensor fields split on orbits of separable states and provide a quantitative characterization of entanglement which recovers the entanglement measure proposed by Schlienz and Mahler (1995). In the case of mixed states we highlight a relation between LIROVTs of order two and a class of computable separability criteria based on the Bloch representation (de Vicente, 2007). (orig.)

  7. Tensor fields on orbits of quantum states and applications

    International Nuclear Information System (INIS)

    Volkert, Georg Friedrich

    2010-01-01

    On classical Lie groups, which act by means of a unitary representation on finite dimensional Hilbert spaces H, we identify two classes of tensor field constructions. First, as pull-back tensor fields of order two from modified Hermitian tensor fields, constructed on Hilbert spaces by means of the property of having the vertical distributions of the C_0-principal bundle H_0 → P(H) over the projective Hilbert space P(H) in the kernel. And second, directly constructed on the Lie group, as left-invariant representation-dependent operator-valued tensor fields (LIROVTs) of arbitrary order being evaluated on a quantum state. Within the NP-hard problem of deciding whether a given state in an n-level bi-partite quantum system is entangled or separable (Gurvits, 2003), we show that both tensor field constructions admit a geometric approach to this problem, which evades the traditional ambiguity of defining metrical structures on the convex set of mixed states. In particular, by considering manifolds associated to orbits passing through a selected state when acted upon by the local unitary group U(n) x U(n) of Schmidt coefficient decomposition inducing transformations, we find the following results: In the case of pure states we show that Schmidt-equivalence classes which are Lagrangian submanifolds define maximally entangled states. This implies a stronger statement than the one proposed by Bengtsson (2007). Moreover, Riemannian pull-back tensor fields split on orbits of separable states and provide a quantitative characterization of entanglement which recovers the entanglement measure proposed by Schlienz and Mahler (1995). In the case of mixed states we highlight a relation between LIROVTs of order two and a class of computable separability criteria based on the Bloch representation (de Vicente, 2007). (orig.)

  8. Hilbert W*-modules and coherent states

    International Nuclear Information System (INIS)

    Bhattacharyya, T; Roy, S Shyam

    2012-01-01

    Hilbert C*-module valued coherent states were introduced earlier by Ali, Bhattacharyya and Shyam Roy. We consider the case when the underlying C*-algebra is a W*-algebra. The construction is similar, with a substantial gain: the associated reproducing kernel is now algebra valued, rather than taking values in the space of bounded linear operators between two C*-algebras. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’. (paper)

  9. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anurag; Kumar, Bimlesh, E-mail: anurag.sharma@iitg.ac.in, E-mail: bimk@iitg.ac.in [Department of Civil Engineering, Indian Institute of Technology Guwahati, 781039 (India)

    2017-02-15

    This experimental study examines the probability distribution functions (PDFs) of turbulent flow characteristics near the bed surface and away from it, for flows both with and without seepage. Laboratory experiments were conducted on a plane sand bed for the no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimentally computed PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events are compared with theoretical expressions obtained from a Gram–Charlier (GC)-based exponential distribution. The experimental observations follow the computed PDFs for both the no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) is used to measure the similarity between the theoretical and experimental PDFs. The JSD for the PDFs of velocity fluctuations lies between 0.0005 and 0.003, while that for the PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDFs of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of the inward and outward interactions, which may be due to weaker events. The JSD for outward and inward interactions ranges from 0.0013 to 0.032, while that for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulent intensity is also developed in the present study; it agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description in the areas of turbulent flow either at a single or finite number of points
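    The JSD similarity measure used in this record can be sketched in a few lines. This is a generic implementation over discrete histograms on shared bins (the bin values below are made up), not the study's data pipeline:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions (base e)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

def jsd(p, q):
    """Jensen-Shannon divergence: symmetrized KL against the mixture (p+q)/2."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two PDFs over the same bins (e.g. theoretical vs. empirical histogram):
p = [0.10, 0.20, 0.40, 0.20, 0.10]
q = [0.12, 0.18, 0.38, 0.22, 0.10]
print(jsd(p, q))  # a small value indicates the two PDFs are similar
```

    Values near zero, like the 0.0005 to 0.006 range reported above, indicate close agreement between the two distributions.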

  10. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid eSepehrband

    2016-05-01

    Axon diameter is an important neuroanatomical characteristic of the nervous system that is altered in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has consistently been shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than the other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with their own distribution profile. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, log-normal, log-logistic and Birnbaum-Saunders distributions.
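    A rough stdlib-only sketch of this kind of model comparison, on synthetic data rather than the electron-microscopy measurements: fit a gamma distribution by the method of moments and a log-normal by maximum likelihood, then compare log-likelihoods. All parameter values are invented for illustration.

```python
import math
import random

rng = random.Random(0)
# Synthetic right-skewed "diameters" (micrometres) drawn from a log-normal;
# these stand in for measured axon diameters, which are non-negative and skewed.
mu, sigma = math.log(1.0), 0.4
data = [rng.lognormvariate(mu, sigma) for _ in range(5000)]

n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / n

# Gamma fit by the method of moments: k = mean^2/var, theta = var/mean.
k, theta = mean * mean / var, var / mean
ll_gamma = sum((k - 1) * math.log(x) - x / theta for x in data) \
    - n * (math.lgamma(k) + k * math.log(theta))

# Log-normal fit by maximum likelihood on the log-diameters.
logs = [math.log(x) for x in data]
mu_hat = sum(logs) / n
sig_hat = math.sqrt(sum((l - mu_hat) ** 2 for l in logs) / n)
ll_lognorm = sum(-math.log(x * sig_hat * math.sqrt(2 * math.pi))
                 - (math.log(x) - mu_hat) ** 2 / (2 * sig_hat ** 2) for x in data)

print(ll_gamma, ll_lognorm)  # the higher log-likelihood wins on this sample
```

    With data genuinely drawn from a log-normal, the log-normal fit attains the higher likelihood, mirroring the paper's finding that several same-parameter-count alternatives outperform the gamma.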

  11. On Some Fractional Stochastic Integrodifferential Equations in Hilbert Space

    Directory of Open Access Journals (Sweden)

    Hamdy M. Ahmed

    2009-01-01

    We study a class of fractional stochastic integrodifferential equations in a real Hilbert space. The existence and uniqueness of mild solutions of the considered problem are established. We also give an application to stochastic integro-partial differential equations of fractional order.

  12. The Hilbert Series of the One Instanton Moduli Space

    CERN Document Server

    Benvenuti, Sergio; Mekareeya, Noppadol

    2010-01-01

    The moduli space of k G-instantons on R^4 for a classical gauge group G is known to be given by the Higgs branch of a supersymmetric gauge theory that lives on Dp branes probing D(p + 4) branes in Type II theories. For p = 3, these (3 + 1) dimensional gauge theories have N = 2 supersymmetry and can be represented by quiver diagrams. The F and D term equations coincide with the ADHM construction. The Hilbert series of the moduli space of one instanton for classical gauge groups is easy to compute and turns out to take a particularly simple form which was previously unknown. This allows for a G-invariant character expansion, and hence the result is easily generalisable to exceptional gauge groups, where an ADHM construction is not known. The conjectures for exceptional groups are further checked using new techniques such as sewing relations in Hilbert series. This is applied to Argyres-Seiberg dualities.

  13. Universal Probability Distribution Function for Bursty Transport in Plasma Turbulence

    International Nuclear Information System (INIS)

    Sandberg, I.; Benkadda, S.; Garbet, X.; Ropokis, G.; Hizanidis, K.; Castillo-Negrete, D. del

    2009-01-01

    Bursty transport phenomena associated with convective motion present universal statistical characteristics across different physical systems. In this Letter, a stochastic univariate model and the associated probability distribution function for the description of bursty transport in plasma turbulence are presented. The proposed stochastic process recovers the universal distribution of density fluctuations observed in the plasma edge of several magnetic confinement devices and the remarkable scaling between their skewness S and kurtosis K. Similar statistical characteristics of variability have also been observed in other physical systems characterized by convection, such as the x-ray fluctuations emitted by the Cygnus X-1 accretion disc plasmas and sea surface temperature fluctuations.

  14. Roger Hayward and the Invention of the Two-Mirror Schmidt

    Science.gov (United States)

    Bell, T. E.

    2005-12-01

    Roger Hayward (1899-1979), now virtually unknown, was a multitalented architect, scientific illustrator, and optical inventor. Remembered primarily for illustrating Scientific American magazine's Amateur Scientist column between 1949 and 1974, he also illustrated more than a dozen textbooks in optics, physics, geology, oceanography, and chemistry, several of which became classics in their fields. He designed façades with astronomical themes for major buildings in Los Angeles, California, and sculpted mammoth, realistic models of the moon for Griffith Observatory, Adler Planetarium, and Disneyland. Throughout his life, he recreationally painted watercolors and oils that at least one critic likened to the work of John Singer Sargent. Hayward is least known as an optical designer, yet he made significant contributions to the DU spectrophotometer that established the multimillion-dollar company Beckman Instruments. During the pre-radar days of World War II at Mount Wilson Observatory, Hayward invented a classified Cassegrain version of the Schmidt telescope especially adapted for nighttime infrared aerial photography, plus extraordinarily simple machines that allowed inexperienced soldiers to grind, polish, and test accurate aspheric Schmidt correcting plates at speeds compatible with mass production - and later received U.S. patents for them all. This paper, drawn in part from unpublished letters between Hayward and Albert G. Ingalls, will feature little-known images of Hayward's work.

  15. Concerning the Hilbert 16th problem

    CERN Document Server

    Ilyashenko, Yu

    1995-01-01

    This book examines qualitative properties of vector fields in the plane, in the spirit of Hilbert's Sixteenth Problem. Two principal topics explored are bifurcations of limit cycles of planar vector fields and desingularization of singular points for individual vector fields and for analytic families of such fields. In addition to presenting important new developments in this area, this book contains an introductory paper which outlines the general context and describes connections between the papers in the volume. The book will appeal to researchers and graduate students working in the qualit

  16. The S-matrix for abstract scattering systems

    International Nuclear Information System (INIS)

    Amrein, W.O.; Pearson, D.B.

    1979-01-01

    Let S(λ) be the S-matrix at energy λ for an abstract scattering system. A bound is derived, in terms of the interaction, on integrals of the form ∫ h(λ) ‖S(λ) − I‖²_HS dλ, where ‖·‖_HS denotes the Hilbert-Schmidt norm. (Auth.)

  17. Non-Gaussian probability distributions of solar wind fluctuations

    Directory of Open Access Journals (Sweden)

    E. Marsch

    The probability distributions of field differences ∆x(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lags τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scales τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.

  18. Ordering of ''ladder'' operators, the Wigner function for number and phase, and the enlarged Hilbert space

    International Nuclear Information System (INIS)

    Luks, A.; Perinova, V.

    1993-01-01

    A suitable ordering of phase exponential operators has been compared with the antinormal ordering of the annihilation and creation operators of a single mode optical field. The extended Wigner function for number and phase in the enlarged Hilbert space has been used for the derivation of the Wigner function for number and phase in the original Hilbert space. (orig.)

  19. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    Science.gov (United States)

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
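    The relation C = Np makes the adjustment a one-line computation. A minimal sketch with hypothetical numbers (the counts and detection probability below are invented for illustration):

```python
# Hypothetical survey of one site: animals actually seen per visit.
counts = [34, 41, 29, 38]   # C: raw counts
p = 0.35                    # detection probability (e.g. from capture-recapture)

# Using raw counts implicitly assumes p = 1, underestimating N whenever p < 1.
naive_N = sum(counts) / len(counts)

# Inverting C = N * p gives the adjusted estimate N-hat = C / p.
adjusted_N = naive_N / p

print(naive_N, adjusted_N)
```

    The point of the abstract is precisely this gap: trends in naive_N track N only if p is constant, which is rarely the case in amphibian surveys.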

  20. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence. Discrete Distributions: Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation. Continuous Distributions: The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

  1. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.

    2014-01-19

    The objective of this paper is to discuss existence, uniqueness and regularity issues for minimizers of one-dimensional calculus of variations problems in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  2. On the minimizers of calculus of variations problems in Hilbert spaces

    KAUST Repository

    Gomes, Diogo A.; Nurbekyan, Levon

    2014-01-01

    The objective of this paper is to discuss existence, uniqueness and regularity issues for minimizers of one-dimensional calculus of variations problems in Hilbert spaces. © 2014 Springer-Verlag Berlin Heidelberg.

  3. A proposed physical analog for a quantum probability amplitude

    Science.gov (United States)

    Boyd, Jeffrey

    What is the physical analog of a probability amplitude? All quantum mathematics, including quantum information, is built on amplitudes. Every other science uses probabilities; QM alone uses their square root. Why? This question has been asked for a century, but no one previously has proposed an answer. We will present cylindrical helices moving toward a particle source, which particles follow backwards. Consider Feynman's book QED. He speaks of amplitudes moving through space like the hand of a spinning clock. His hand is a complex vector. It traces a cylindrical helix in Cartesian space. The Theory of Elementary Waves reverses the direction, so that Feynman's clock faces move toward the particle source. Particles follow amplitudes (quantum waves) backwards. This contradicts wave-particle duality. We will present empirical evidence that wave-particle duality is wrong about the direction of particles versus waves. This involves a paradigm shift, and paradigm shifts are always controversial. We believe that our model is the only proposal ever made for the physical foundations of probability amplitudes. We will show that our ``probability amplitudes'' in physical nature form a Hilbert vector space with adjoints and an inner product, and support both linear algebra and Dirac notation.

  4. Kennicutt-Schmidt Relation Variety and Star-forming Cloud Fraction

    Energy Technology Data Exchange (ETDEWEB)

    Morokuma-Matsui, Kana [Chile Observatory, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka-shi, Tokyo 181-8588 (Japan); Muraoka, Kazuyuki, E-mail: kana.matsui@nao.ac.jp [Department of Physical Science, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531 (Japan)

    2017-03-10

    The observationally derived Kennicutt-Schmidt (KS) relation slopes differ from study to study, ranging from sublinear to superlinear. We investigate the KS-relation variety (slope and normalization) as a function of the integrated intensity ratio R_31 = CO(J = 3–2)/CO(J = 1–0), using spatially resolved CO(J = 1–0), CO(J = 3–2), H I, Hα, and 24 μm data of three nearby spiral galaxies (NGC 3627, NGC 5055, and M83). We find that (1) the slopes for each subsample with a fixed R_31 are shallower, but the slope for all data sets combined becomes steeper; (2) normalizations for high-R_31 subsamples tend to be high; (3) R_31 correlates with star formation efficiency, so the KS relation depends on the distribution of the samples in R_31–Σ_gas space: no Σ_gas dependence of R_31 results in a linear slope of the KS relation, whereas a positive correlation between Σ_gas and R_31 results in a superlinear slope; and (4) R_31–Σ_gas distributions differ from galaxy to galaxy and within a galaxy: galaxies with prominent galactic structure tend to have large R_31 and Σ_gas. Our results suggest that the formation efficiency of star-forming clouds from molecular gas differs among galaxies as well as within a galaxy, and that it is one of the key factors inducing the variety in galactic KS relations.

  5. Extreme points of the convex set of joint probability distributions with ...

    Indian Academy of Sciences (India)

    Here we address the following problem: If G is a standard ... convex set of all joint probability distributions on the product Borel space (X1 × X2, F1 ⊗ F2) which .... cannot be identically zero when X and Y vary in A1 and u and v vary in H2. Thus.

  6. Flux-probability distributions from the master equation for radiation transport in stochastic media

    International Nuclear Information System (INIS)

    Franke, Brian C.; Prinja, Anil K.

    2011-01-01

    We present numerical investigations into the accuracy of approximations in the master equation for radiation transport in discrete binary random media. Our solutions of the master equation yield probability distributions of particle flux at each element of phase space. We employ the Levermore-Pomraning interface closure and evaluate the effectiveness of closures for the joint conditional flux distribution for estimating scattering integrals. We propose a parameterized model for this joint-pdf closure, varying between correlation neglect and a full-correlation model. The closure is evaluated for a variety of parameter settings. Comparisons are made with benchmark results obtained through suites of fixed-geometry realizations of random media in rod problems. All calculations are performed using Monte Carlo techniques. Accuracy of the approximations in the master equation is assessed by examining the probability distributions for reflection and transmission and by evaluating the moments of the pdfs. The results suggest the correlation-neglect setting in our model performs best and shows improved agreement in the atomic-mix limit. (author)

  7. Monopole operators and Hilbert series of Coulomb branches of 3d N = 4 gauge theories

    Science.gov (United States)

    Cremonesi, Stefano; Hanany, Amihay; Zaffaroni, Alberto

    2014-01-01

    This paper addresses a long-standing problem: to identify the chiral ring and moduli space (i.e. as an algebraic variety) on the Coulomb branch of an N = 4 superconformal field theory in 2+1 dimensions. Previous techniques involved a computation of the metric on the moduli space and/or mirror symmetry. These methods are limited to sufficiently small moduli spaces, with enough symmetry, or to Higgs branches of sufficiently small gauge theories. We introduce a simple formula for the Hilbert series of the Coulomb branch, which applies to any good or ugly three-dimensional N = 4 gauge theory. The formula counts monopole operators which are dressed by classical operators, the Casimir invariants of the residual gauge group that is left unbroken by the magnetic flux. We apply our formula to several classes of gauge theories. Along the way we make various tests of mirror symmetry, successfully comparing the Hilbert series of the Coulomb branch with the Hilbert series of the Higgs branch of the mirror theory.
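    A minimal sketch of a monopole-formula computation for the simplest case, U(1) with N flavors, where the bare monopole of charge m has dimension Δ(m) = N|m|/2 and the unbroken U(1) contributes a dressing factor 1/(1−t). For N = 2 this reproduces the Hilbert series of C²/Z₂, a standard check, not an example taken from this paper:

```python
# Truncated power series in t, represented as lists of coefficients.
TRUNC = 8

def mul(a, b):
    """Product of two truncated power series."""
    c = [0] * TRUNC
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < TRUNC:
                c[i + j] += ai * bj
    return c

# Dressing factor 1/(1 - t) from the Casimir of the unbroken U(1).
inv_one_minus_t = [1] * TRUNC

# Sum over magnetic charges m in Z with Delta(m) = N * |m| / 2; take N = 2.
N = 2
bare = [0] * TRUNC
bare[0] = 1                 # m = 0
for m in range(1, TRUNC):
    d = N * m // 2          # integer dimension for even N
    if d < TRUNC:
        bare[d] += 2        # charges +m and -m contribute equally

hilbert = mul(bare, inv_one_minus_t)
print(hilbert)  # coefficients 1, 3, 5, 7, ...: the Hilbert series of C^2/Z_2
```

    The closed form is (1+t)/(1−t)², whose expansion 1 + 3t + 5t² + ... counts the Z₂-invariant holomorphic functions on C² by degree.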

  8. Frame transforms, star products and quantum mechanics on phase space

    International Nuclear Information System (INIS)

    Aniello, P; Marmo, G; Man'ko, V I

    2008-01-01

    Using the notions of frame transform and of square integrable projective representation of a locally compact group G, we introduce a class of isometries (tight frame transforms) from the space of Hilbert-Schmidt operators in the carrier Hilbert space of the representation into the space of square integrable functions on the direct product group G x G. These transforms have remarkable properties. In particular, their ranges are reproducing kernel Hilbert spaces endowed with a suitable 'star product' which mimics, at the level of functions, the original product of operators. A 'phase space formulation' of quantum mechanics relying on the frame transforms introduced in the present paper, and the link of these maps with both the Wigner transform and the wavelet transform, are discussed.

  9. Frames and generalized shift-invariant systems

    DEFF Research Database (Denmark)

    Christensen, Ole

    2004-01-01

    With motivation from the theory of Hilbert-Schmidt operators we review recent topics concerning frames in L²(R) and their duals. Frames are generalizations of orthonormal bases in Hilbert spaces. As for an orthonormal basis, a frame allows each element in the underlying Hilbert space to be written as an unconditionally convergent infinite linear combination of the frame elements; however, in contrast to the situation for a basis, the coefficients might not be unique. We present the basic facts from frame theory and the motivation for the fact that most recent research concentrates on tight frames or dual frame pairs rather than general frames and their canonical duals. The corresponding results for Gabor frames and wavelet frames are discussed in detail.
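    A finite-dimensional toy example of the redundancy described above (not from this review): the three-vector "Mercedes-Benz" tight frame in R², whose frame operator is (3/2)I, so the canonical dual frame is just each vector rescaled by 2/3 and reconstruction is exact despite non-unique coefficients.

```python
import math

# A tight frame for R^2: three unit vectors at 120-degree angles.
frame = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
         for k in range(3)]

def analysis(x):
    """Frame coefficients <x, f_k>: 3 redundant numbers for a 2-d vector."""
    return [x[0] * f[0] + x[1] * f[1] for f in frame]

def synthesis(coeffs):
    """Reconstruction with the canonical dual frame (2/3) * f_k,
    valid because the frame operator here is (3/2) * identity."""
    x0 = sum(c * f[0] for c, f in zip(coeffs, frame)) * 2.0 / 3.0
    x1 = sum(c * f[1] for c, f in zip(coeffs, frame)) * 2.0 / 3.0
    return (x0, x1)

x = (1.25, -0.5)
x_rec = synthesis(analysis(x))
print(x_rec)  # recovers x even though the coefficients are not unique
```

    Tight frames are convenient for exactly this reason: the dual is a scalar multiple of the frame itself, so no separate dual computation is needed.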

  10. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t given that it was down at time t' (t' ≤ t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  11. Frequencies of digits, divergence points, and Schmidt games

    International Nuclear Information System (INIS)

    Olsen, L.

    2009-01-01

    Sets of divergence points, i.e. numbers x (or tuples of numbers) for which the limiting frequency of a given string of N-adic digits of x fails to exist, have recently attracted huge interest in the literature. In this paper we consider sets of simultaneous divergence points, i.e. numbers x (or tuples of numbers) for which the limiting frequencies of all strings of N-adic digits of x fail to exist. We show that many natural sets of simultaneous divergence points are (α, β)-winning sets in the sense of the Schmidt game. As an application we obtain lower bounds for the Hausdorff dimension of these sets.

  12. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula gives P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
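    Assuming the normalized form P(F) = √(A/π) exp(−AF²) (the prefactor is the usual Gaussian normalization, not stated explicitly in the abstract) and an arbitrary value of A, a quick numerical check of the normalization:

```python
import math

A = 2.0  # depends on density, temperature and the pair potential's transform

def p_force(f):
    """Gaussian force distribution P(F) = sqrt(A/pi) * exp(-A * F^2)."""
    return math.sqrt(A / math.pi) * math.exp(-A * f * f)

# Trapezoidal-rule check that P integrates to 1 over a wide grid.
lo, hi, n = -10.0, 10.0, 100_000
h = (hi - lo) / n
total = 0.5 * (p_force(lo) + p_force(hi)) \
    + sum(p_force(lo + i * h) for i in range(1, n))
total *= h
print(total)  # ~1.0
```

    The same A controls the variance of the net force, Var(F) = 1/(2A), which is how the density and temperature dependence noted above shows up in practice.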

  13. Coherence Properties of Strongly Interacting Atomic Vapors in Waveguides

    Science.gov (United States)

    2011-12-31

    lattice in the mean-field regime [22]. There the goal was to repeat, for our system, the Chirikov-Izrailev program for the Fermi-Pasta-Ulam chain and...define a typical deviation from ergodicity), we introduce a geometric structure (based on the Frobenius, or Hilbert-Schmidt, inner product) on the space of

  14. Matrix inequalities for the difference between arithmetic mean and harmonic mean

    OpenAIRE

    Liao, Wenshi; Wu, Junliang

    2015-01-01

    Motivated by the refinements and reverses of arithmetic-geometric mean and arithmetic-harmonic mean inequalities for scalars and matrices, in this article, we generalize the scalar and matrix inequalities for the difference between arithmetic mean and harmonic mean. In addition, relevant inequalities for the Hilbert-Schmidt norm and determinant are established.
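    In the commuting (simultaneously diagonalizable) case, the matrix arithmetic-harmonic mean difference reduces entrywise to the scalar inequality. A small numerical sketch with hypothetical eigenvalues, including the Hilbert-Schmidt (Frobenius) norm of the difference, not an implementation of the article's refined bounds:

```python
import math

# Diagonal (hence commuting) positive definite matrices: the matrix means
# reduce to scalar means applied eigenvalue by eigenvalue.
a = [4.0, 9.0, 1.0]   # eigenvalues of A
b = [1.0, 3.0, 2.0]   # eigenvalues of B

am = [(x + y) / 2.0 for x, y in zip(a, b)]          # arithmetic mean
hm = [2.0 * x * y / (x + y) for x, y in zip(a, b)]  # harmonic mean

diff = [x - y for x, y in zip(am, hm)]
# Hilbert-Schmidt (Frobenius) norm of the diagonal difference matrix:
hs_norm = math.sqrt(sum(d * d for d in diff))

print(diff, hs_norm)
# Each entry is non-negative: AM - HM >= 0 in the Loewner order here.
```

    For non-commuting matrices the operator means no longer reduce to scalars, which is where the matrix-specific inequalities of the article become nontrivial.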

  15. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Design floods are a key input for sizing new water works and for reviewing the hydrological safety of existing ones. The most reliable method for estimating their magnitudes associated with given return periods is to fit a probabilistic model to the available records of maximum annual flows. Since the appropriate model is a priori unknown, several models need to be tested in order to select the most suitable one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records, and their application has therefore been established as a norm or precept. The Johnson system comprises three families of distributions, one of which is the Log-Normal model with three fit parameters; it is also the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained with that model. Having contrasted the three probability distributions established by precept on 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared with the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models for low return periods (< 1000 years). Because of its theoretical support, the SJU model is recommended for flood estimation.
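    Predictions from the unbounded Johnson (SU) family are obtained through the standard normal distribution, as described above: the quantile of return period T is a sinh-transformed normal quantile. The fit parameters below are invented for illustration and are not from the paper's 31 records.

```python
import math
from statistics import NormalDist

def johnson_su_quantile(T, gamma, delta, xi, lam):
    """Flood of return period T years from an unbounded Johnson (SU) fit:
    x = xi + lam * sinh((z - gamma) / delta), where z is the standard
    normal quantile of the non-exceedance probability 1 - 1/T."""
    z = NormalDist().inv_cdf(1.0 - 1.0 / T)
    return xi + lam * math.sinh((z - gamma) / delta)

# Hypothetical fit parameters (shape gamma, delta; location xi; scale lam):
gamma, delta, xi, lam = -1.2, 1.8, 250.0, 180.0
for T in (10, 100, 1000):
    print(T, round(johnson_su_quantile(T, gamma, delta, xi, lam), 1))
```

    Because sinh is unbounded and increasing, the estimated flood grows monotonically with the return period, with no artificial upper limit.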

  16. Joint probability distributions and fluctuation theorems

    International Nuclear Information System (INIS)

    García-García, Reinaldo; Kolton, Alejandro B; Domínguez, Daniel; Lecomte, Vivien

    2012-01-01

    We derive various exact results for Markovian systems that spontaneously relax to a non-equilibrium steady state by using joint probability distribution symmetries of different entropy production decompositions. The analytical approach is applied to diverse problems such as the description of the fluctuations induced by experimental errors, for unveiling symmetries of correlation functions appearing in fluctuation–dissipation relations recently generalized to non-equilibrium steady states, and also for mapping averages between different trajectory-based dynamical ensembles. Many known fluctuation theorems arise as special instances of our approach for particular twofold decompositions of the total entropy production. As a complement, we also briefly review and synthesize the variety of fluctuation theorems applying to stochastic dynamics of both continuous systems described by a Langevin dynamics and discrete systems obeying a Markov dynamics, emphasizing how these results emerge from distinct symmetries of the dynamical entropy of the trajectory followed by the system. For Langevin dynamics, we embed the 'dual dynamics' with a physical meaning, and for Markov systems we show how the fluctuation theorems translate into symmetries of modified evolution operators

  17. THREE-MOMENT BASED APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING SYSTEMS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2014-03-01

    Full Text Available The paper deals with the approximation of probability distributions of random variables defined on the positive real axis with a coefficient of variation different from unity. When queueing systems are used as models of computer networks, characteristics are usually calculated at the level of the expectation and variance. At the same time, one of the main characteristics of multimedia transmission quality in computer networks is delay jitter. To calculate jitter, the distribution function of packet delay must be known. It is shown that changing the third moment of the packet-delay distribution changes the calculated jitter by tens or hundreds of percent, even with the same values of the first two moments, the expectation and the delay variation coefficient. This means that, for jitter calculation, the delay distribution should be approximated so as to match the third moment as well. For random variables with coefficients of variation greater than unity, an iterative approximation algorithm with a two-phase hyper-exponential distribution based on three moments of the approximated distribution is offered. It is shown that for random variables with coefficients of variation less than unity the impact of the third moment becomes negligible, and such distributions should be approximated by an Erlang distribution matching the first two moments. This approach makes it possible to obtain upper bounds for the relevant characteristics, in particular the upper bound of delay jitter.
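    The iterative three-moment H2 algorithm itself is not given in the abstract. As a hedged sketch, the standard two-moment "balanced means" hyper-exponential fit, a common starting point for such iterations (function names here are illustrative, not from the paper), can be written as:

```python
import numpy as np

def h2_balanced_means(mean, scv):
    """Two-phase hyper-exponential (H2) matching a given mean and squared
    coefficient of variation scv > 1, under the balanced-means convention
    (each phase contributes half the mean)."""
    p1 = 0.5 * (1.0 + np.sqrt((scv - 1.0) / (scv + 1.0)))
    lam1 = 2.0 * p1 / mean
    lam2 = 2.0 * (1.0 - p1) / mean
    return p1, lam1, lam2

def h2_moments(p1, lam1, lam2):
    """First three raw moments of the H2 distribution."""
    p2 = 1.0 - p1
    m1 = p1 / lam1 + p2 / lam2
    m2 = 2.0 * (p1 / lam1**2 + p2 / lam2**2)
    m3 = 6.0 * (p1 / lam1**3 + p2 / lam2**3)
    return m1, m2, m3

p1, lam1, lam2 = h2_balanced_means(mean=1.0, scv=4.0)
m1, m2, m3 = h2_moments(p1, lam1, lam2)
print(m1, m2)   # 1.0 and mean**2 * (scv + 1) = 5.0
```

    A three-moment procedure of the kind described above would then perturb the parameters away from the balanced-means solution until the third moment also matches its target; the third moment is the remaining degree of freedom.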

  18. Quantile selection procedure and associated distribution of ratios of order statistics from a restricted family of probability distributions

    International Nuclear Information System (INIS)

    Gupta, S.S.; Panchapakesan, S.

    1975-01-01

    A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained and tables of the appropriate constants, percentiles of this statistic, are given in order to facilitate the use of the selection procedure

  19. The probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels in high temperature, high purity water

    International Nuclear Information System (INIS)

    Akashi, Masatsune; Kenjyo, Takao; Matsukura, Shinji; Kawamoto, Teruaki

    1984-01-01

    In order to discuss the probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels, a series of creviced bent beam (CBB) and uni-axial constant load tests was carried out in oxygenated high temperature, high purity water. The following conclusions were reached: (1) Based on the CBB test results, the initiation process of intergranular stress corrosion cracking can be approximated by a Poisson stochastic process. (2) The probability distribution of intergranular stress corrosion cracking life may consequently be approximated by the exponential probability distribution. (3) The experimental data could indeed be fitted to the exponential probability distribution. (author)
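    The link between a Poisson initiation process and an exponential life distribution can be illustrated numerically (a sketch only; the initiation rate below is hypothetical, not taken from the tests): the waiting time to the first event of a Poisson process is exponential, so the empirical survival fraction should track exp(-λt):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.2              # hypothetical crack-initiation rate per unit time
n_specimens = 100_000

# Waiting time to the first event of a Poisson process is exponential.
lives = rng.exponential(1.0 / rate, size=n_specimens)

t = 5.0
empirical = np.mean(lives > t)      # fraction of specimens still uncracked
theoretical = np.exp(-rate * t)     # exponential survival function
print(empirical, theoretical)       # should agree closely
```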

  20. Hilbert, Fock and Cantorian spaces in the quantum two-slit gedanken experiment

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2006-01-01

    On the one hand, a rigorous mathematical formulation of quantum mechanics requires the introduction of a Hilbert space and, as we move to the second quantization, a Fock space. On the other hand, the Cantorian E-infinity approach to quantum physics was developed largely without any direct reference to the aforementioned mathematical spaces. In the present work we utilize some novel reinterpretations of basic E(∞) Cantorian spacetime relations in terms of the Hilbert space of quantum mechanics. Proceeding in this way, we gain a better understanding of the physico-mathematical structure of quantum spacetime which is at the heart of the paradoxical and non-intuitive outcome of the famous quantum two-slit gedanken experiment

  1. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  2. On the issues of probability distribution of GPS carrier phase observations

    Science.gov (United States)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In common practice the observables related to Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slips detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS

  3. Construction of unitary matrices from observable transition probabilities

    International Nuclear Information System (INIS)

    Peres, A.

    1989-01-01

    An ideal measuring apparatus defines an orthonormal basis |u_m⟩ in Hilbert space. Another apparatus defines another basis |v_μ⟩. Both apparatuses together allow one to measure the transition probabilities P_mμ = |⟨u_m|v_μ⟩|². The problem is: given all the elements of a doubly stochastic matrix P_mμ, find a unitary matrix U_mμ such that P_mμ = |U_mμ|². The number of unknown nontrivial phases is equal to the number of independent equations to satisfy. The problem can therefore be solved provided that the values of the P_mμ satisfy some inequalities. (orig.)
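    The forward direction of the problem stated above is easy to check numerically (this is not Peres's inversion procedure, only its premise): any unitary U yields a doubly stochastic matrix P with entries |U_mμ|²:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary from the QR decomposition of a complex Gaussian matrix.
n = 4
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(z)

P = np.abs(U) ** 2          # transition probabilities P_mu = |U_mu|^2
print(P.sum(axis=0))        # each column sums to 1
print(P.sum(axis=1))        # each row sums to 1
```

    The inverse problem treated in the record, recovering the phases of U from a given doubly stochastic P, is solvable only when the P_mμ satisfy the inequalities mentioned in the abstract.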

  4. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    International Nuclear Information System (INIS)

    Xu, Xin-Ping; Ide, Yusuke

    2016-01-01

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.

  5. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn [School of Physical Science and Technology, Soochow University, Suzhou 215006 (China); Ide, Yusuke [Department of Information Systems Creation, Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa, 221-8686 (Japan)

    2016-10-15

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.
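    The two records above derive the limiting distribution exactly; a hedged numerical counterpart is to time-average a simulated walk. The sketch below uses a Hadamard coin and a moving (not swapping) shift on a cycle, so it illustrates the quantity being computed rather than the paper's general solution:

```python
import numpy as np

N, T = 11, 2000                                  # cycle size, steps averaged
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2.0)   # Hadamard coin (example)

psi = np.zeros((2, N), dtype=complex)            # psi[c, x]: coin c, site x
psi[0, 0] = 1.0                                  # walker starts at site 0

pbar = np.zeros(N)
for _ in range(T):
    psi = H @ psi                    # coin flip acts on the coin index
    psi[0] = np.roll(psi[0], 1)      # coin 0 steps right
    psi[1] = np.roll(psi[1], -1)     # coin 1 steps left
    pbar += (np.abs(psi) ** 2).sum(axis=0)
pbar /= T                            # time-averaged (limiting) distribution
print(pbar.sum())                    # ~1.0
```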

  6. Examining barrier distributions and, in extension, energy derivative of probabilities for surrogate experiments

    International Nuclear Information System (INIS)

    Romain, P.; Duarte, H.; Morillon, B.

    2012-01-01

    The energy derivatives of probabilities are functions suited to a better understanding of certain reaction mechanisms. Applied to compound nuclear reactions, they can provide information on fusion barrier distributions, as originally introduced, and also, as presented here, on fission barrier distributions and heights. By extension, they give access to the compound-nucleus spin-parity states preferentially populated through a given entrance channel at a given energy. (authors)

  7. Probing the Rate-Determining Step of the Claisen-Schmidt Condensation by Competition Reactions

    Science.gov (United States)

    Mak, Kendrew K. W.; Chan, Wing-Fat; Lung, Ka-Ying; Lam, Wai-Yee; Ng, Weng-Cheong; Lee, Siu-Fung

    2007-01-01

    Competition experiments are a useful tool for preliminary study of the linear free energy relationship of organic reactions. This article describes a physical organic experiment for upper-level undergraduates to identify the rate-determining step of the Claisen-Schmidt condensation of benzaldehyde and acetophenone by studying the linear free…

  8. Cutting force response in milling of Inconel: analysis by wavelet and Hilbert-Huang Transforms

    Directory of Open Access Journals (Sweden)

    Grzegorz Litak

    Full Text Available We study the milling process of Inconel. By continuously increasing the cutting depth we follow the system response and the appearance of oscillations of larger amplitude. The amplitude and frequency of the cutting force have been analysed by means of wavelet and Hilbert-Huang transforms. We report that in our system the force oscillations are closely related to the rotational motion of the tool, and we advocate a regenerative mechanism of chatter vibrations. To localize the occurrence of vibration amplitudes in time, we apply the wavelet and Hilbert-Huang transforms.

  9. On the discovery of the gravitational field equations by Einstein and Hilbert: new materials

    International Nuclear Information System (INIS)

    Vizgin, Vladimir P

    2001-01-01

    This article describes the history of the discovery of the gravitational field equations by Albert Einstein and David Hilbert in November 1915. The proof sheets of Hilbert's lecture report, given on 20 November 1915 and published in March 1916, rediscovered in 1997 in the archive of the University of Goettingen, throw new light on the history of this discovery. We also discuss the early history of the general theory of relativity that led to the expression of the generally covariant equations of the gravitational field. (from the history of physics)

  10. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    Science.gov (United States)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately, deep-tissue multi-photon microscopy images are in general noisy, since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to first enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel in the image, which is achieved by finding a constrained least-squares estimate of the unknown distribution. In deriving the distribution it is assumed that the noise is Poisson distributed, that the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and that, since the observed data also assume finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that the unknown probability mass function can be closely estimated under these assumptions.

  11. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  12. First attempt to study rock glaciers in New Zealand using the Schmidt-hammer - framework and preliminary results

    Science.gov (United States)

    Winkler, Stefan; Lambiel, Christophe; Sattler, Katrin; Büche, Thomas; Springer, Johanna

    2016-04-01

    Although not uncommon in the drier eastern parts of the Southern Alps, New Zealand, comparatively few studies have focused on rock glacier dynamics and spatial distribution. Neither investigations of their chronological constraints nor any studies of actual rock glacier velocities have yet been carried out. Rock glaciers and periglacial processes thus constitute a largely unexplored albeit potentially valuable field of research in the Southern Alps. The high-altitude valley head of Irishman Stream in the Ben Ohau Range between Lakes Ohau and Pukaki, roughly 30 km southeast of the Main Divide, contains a few morphologically intact rock glaciers, some of which appear to be active features (Sattler et al. 2016). Previous work on the Late-glacial and early Holocene moraines in the valley head below the rock glaciers (Kaplan et al. 2010) provided 10Be ages that could be utilised as fixed points for SHD (Schmidt-hammer exposure-age dating). Apart from detailed Schmidt-hammer sampling on the Late-glacial and early Holocene moraines, two altitudinal transects from toe to apex were measured in detail on selected rock glaciers. On each of the multiple ridges of the rock glacier surface, three sites of 50 boulders were sampled with one impact each by the hammer (an N-type electronic SilverSchmidt by Proceq). Apart from obtaining some age constraints on these periglacial features in comparison with the well-dated moraines, the Schmidt-hammer measurements also aimed to provide some insight into their genetic development, which has resulted in a quite complex morphology of the rock glaciers and partial interaction with some of the moraines. Both altitudinal transects reveal a clear and continuous trend of increasing means (i.e. less weathered/younger exposure ages) towards the apex. The values for the individual ridges show, however, a transitional character with adjacent ridges albeit the abovementioned trend not statistically

  13. Novel microwave photonic fractional Hilbert transformer using a ring resonator-based optical all-pass filter.

    Science.gov (United States)

    Zhuang, Leimeng; Khan, Muhammad Rezaul; Beeker, Willem; Leinse, Arne; Heideman, René; Roeloffzen, Chris

    2012-11-19

    We propose and demonstrate a novel wideband microwave photonic fractional Hilbert transformer implemented using a ring resonator-based optical all-pass filter. The full programmability of the ring resonator allows variable and arbitrary fractional orders of the Hilbert transformer. The performance analysis in both the frequency and time domains validates that the proposed implementation provides a good approximation to an ideal fractional Hilbert transformer. This is also experimentally verified by an electrical S21 response characterization performed on a waveguide realization of a ring resonator. The waveguide-based structure allows the proposed Hilbert transformer to be integrated together with other building blocks on a photonic integrated circuit to create various system-level functionalities for on-chip microwave photonic signal processors. As an example, a circuit consisting of a splitter and a ring resonator has been realized which can perform on-chip phase control of microwave signals generated by means of optical heterodyning, and simultaneous generation of in-phase and quadrature microwave signals over a wide frequency range. For these functionalities, this simple on-chip solution is considered practical, particularly when operating together with a dual-frequency laser. To the best of our knowledge, this is the first on-chip demonstration in which ring resonators are employed to perform phase control functionalities for the optical generation of microwave signals by means of optical heterodyning.

  14. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    Science.gov (United States)

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  15. A Bernstein-Von Mises Theorem for discrete probability distributions

    OpenAIRE

    Boucheron, S.; Gassiat, E.

    2008-01-01

    We investigate the asymptotic normality of the posterior distribution in the discrete setting, when the model dimension increases with the sample size. We consider a probability mass function θ0 on ℕ∖{0} and a sequence of truncation levels (kn)n satisfying kn³ ≤ n·inf i≤kn θ0(i). Let θ̂ denote the maximum likelihood estimate of (θ0(i))i≤kn and let Δn(θ0) denote the kn-dimensional vector whose i-th coordinate is √n(θ̂n(i) − θ0(i)) for 1 ≤ i ≤ kn. We check that under mild ...

  16. A Proof of the Hilbert-Smith Conjecture

    OpenAIRE

    McAuley, Louis F.

    2001-01-01

    The Hilbert-Smith Conjecture states that if G is a locally compact group which acts effectively on a connected manifold as a topological transformation group, then G is a Lie group. A rather straightforward proof of this conjecture is given. The motivation is work of Cernavskii (``Finite-to-one mappings of manifolds'', Trans. of Math. Sk. 65 (107), 1964.) His work is generalized to the orbit map of an effective action of a p-adic group on compact connected n-manifolds with the aid of some new...

  17. Characterizing single-molecule FRET dynamics with probability distribution analysis.

    Science.gov (United States)

    Santoso, Yusdi; Torella, Joseph P; Kapanidis, Achillefs N

    2010-07-12

    Probability distribution analysis (PDA) is a recently developed statistical tool for predicting the shapes of single-molecule fluorescence resonance energy transfer (smFRET) histograms, which allows the identification of single or multiple static molecular species within a single histogram. We used a generalized PDA method to predict the shapes of FRET histograms for molecules interconverting dynamically between multiple states. This method is tested on a series of model systems, including both static DNA fragments and dynamic DNA hairpins. By fitting the shape of this expected distribution to experimental data, the timescale of hairpin conformational fluctuations can be recovered, in good agreement with earlier published results obtained using different techniques. This method is also applied to studying the conformational fluctuations in the unliganded Klenow fragment (KF) of Escherichia coli DNA polymerase I, which allows both confirmation of the consistency of a simple, two-state kinetic model with the observed smFRET distribution of unliganded KF and extraction of a millisecond fluctuation timescale, in good agreement with rates reported elsewhere. We expect this method to be useful in extracting rates from processes exhibiting dynamic FRET, and in hypothesis-testing models of conformational dynamics against experimental data.
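    The core of PDA is its shot-noise kernel: for a static species detected with a finite number of photons, acceptor counts are binomial, which fixes the histogram width. A minimal sketch with synthetic parameters (not the generalized dynamic PDA of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def static_fret_ratios(E_true, n_bursts=50_000, photons=30):
    """Shot-noise-limited proximity ratios for a static FRET species:
    acceptor photon counts are Binomial(photons, E_true)."""
    acceptor = rng.binomial(photons, E_true, size=n_bursts)
    return acceptor / photons

E = static_fret_ratios(0.7)
print(E.mean())     # close to the true efficiency, 0.7
```

    Dynamic PDA then convolves kernels of this kind with the interconversion kinetics; fitting the resulting histogram shape is what recovers the fluctuation timescale.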

  18. Strong semiclassical approximation of Wigner functions for the Hartree dynamics

    KAUST Repository

    Athanassoulis, Agissilaos; Paul, Thierry; Pezzotti, Federica; Pulvirenti, Mario

    2011-01-01

    We consider the Wigner equation corresponding to a nonlinear Schrödinger evolution of the Hartree type in the semiclassical limit h → 0. Under appropriate assumptions on the initial data and the interaction potential, we show that the Wigner function is close in L² to its weak limit, the solution of the corresponding Vlasov equation. The strong approximation allows the construction of semiclassical operator-valued observables, approximating their quantum counterparts in the Hilbert-Schmidt topology. The proof makes use of a pointwise-positivity manipulation, which seems necessary in working with the L² norm and the precise form of the nonlinearity. We employ the Husimi function as a pivot between the classical probability density and the Wigner function, which, as is well known, is not pointwise positive in general.

  19. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments. Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables. The Law of Large Numbers; Conditional Expectation; Generating Functions. Branching Processes. Random Walk Revisited; Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples. Probability Distributions of Markov Chains; The First Step Analysis. Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case. Simulation; Distribution F...

  20. Ruin Probabilities and Aggregrate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.

  1. A note on high Schmidt number laminar buoyant jets discharged horizontally

    International Nuclear Information System (INIS)

    Dewan, A.; Arakeri, J.H.; Srinivasan, J.

    1992-01-01

    This paper reports on a new model developed for the integral analysis of high Schmidt number (or, equivalently, high Prandtl number) laminar buoyant jets discharged horizontally. The model assumes a top-hat density profile across the inner core of the jet and a Gaussian velocity profile. The entrainment coefficient corresponding to a pure laminar jet has been used in the analysis. The predictions of the jet trajectory agree well with experimental data in the regions where the jet remains laminar

  2. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
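    A toy version of characteristic-function-based parameter estimation of the kind described above (a sketch with illustrative sample sizes; it relies on the known result that a filtered Poisson process with exponentially distributed amplitudes has a Gamma stationary distribution):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data with a Gamma(k, scale=1) stationary distribution.
true_shape = 2.5
x = rng.gamma(true_shape, 1.0, size=50_000)

omega = np.linspace(0.1, 2.0, 40)
ecf = np.exp(1j * np.outer(omega, x)).mean(axis=1)   # empirical char. function

def gamma_cf(omega, k):
    """Characteristic function of Gamma(k, scale=1)."""
    return (1.0 - 1j * omega) ** (-k)

# Estimate the shape parameter by a grid search on the ECF misfit.
grid = np.linspace(0.5, 5.0, 451)
cost = [np.sum(np.abs(ecf - gamma_cf(omega, k)) ** 2) for k in grid]
k_hat = grid[int(np.argmin(cost))]
print(k_hat)        # close to 2.5
```

    Working in the characteristic-function domain sidesteps the need for a closed-form PDF, which is the point made in the abstract for amplitudes that are not positive definite.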

  3. Critical Assessment Of The Issues In The Application Of Hilbert Transform To Compute The Logarithmic Decrement

    OpenAIRE

    Majewski M.; Magalas L.B.

    2015-01-01

    The parametric OMI (Optimization in Multiple Intervals), the Yoshida-Magalas (YM) and a novel Hilbert-twin (H-twin) methods are advocated for computing the logarithmic decrement in the field of internal friction and mechanical spectroscopy of solids. It is shown that the dispersion in experimental points results mainly from the choice of computing method, the number of oscillations, and noise. It is demonstrated that the conventional Hilbert transform method suffers from high dispersion in in...

  4. Hilbert space representation of the SOq(N)-covariant Heisenberg algebra

    International Nuclear Information System (INIS)

    Hebecker, A.; Weich, W.

    1993-01-01

    The differential calculus on SO q (N)-covariant quantum planes is rewritten in polar co-ordinates. Thus a Hilbert space formulation of q-deformed quantum mechanics can be developed particularly suitable for spherically symmetric potentials. The simplest case of a free particle is solved showing a discrete energy spectrum. (orig.)

  5. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also used for biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF is used by itself as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of the sampled voice signals obtained from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
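    A minimal sketch of the idea of using an amplitude histogram (an empirical PDF) directly as a fixed-length feature vector; the signals below are synthetic stand-ins for voice frames, and the paper's pre-processing steps are omitted:

```python
import numpy as np

def pdf_feature(signal, bins=32, amp_range=(-1.0, 1.0)):
    """Per-bin amplitude probabilities (an empirical PDF) used as a
    fixed-length feature vector."""
    hist, edges = np.histogram(signal, bins=bins, range=amp_range, density=True)
    return hist * np.diff(edges)        # probabilities, summing to 1

t = np.linspace(0.0, 1.0, 8000, endpoint=False)
voice_a = 0.6 * np.sin(2 * np.pi * 220 * t)            # stand-in "speaker A"
voice_b = (0.6 * np.sin(2 * np.pi * 220 * t)
           + 0.1 * np.sin(2 * np.pi * 440 * t))        # stand-in "speaker B"

fa, fb = pdf_feature(voice_a), pdf_feature(voice_b)
print(fa.sum(), fb.sum())   # both 1.0
```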

  6. Methods for detection and characterization of signals in noisy data with the Hilbert-Huang transform

    International Nuclear Information System (INIS)

    Stroeer, Alexander; Cannizzo, John K.; Camp, Jordan B.; Gagarin, Nicolas

    2009-01-01

    The Hilbert-Huang transform is a novel, adaptive approach to time series analysis that does not make assumptions about the data form. Its adaptive, local character allows the decomposition of nonstationary signals with high time-frequency resolution but also renders it susceptible to degradation from noise. We show that complementing the Hilbert-Huang transform with techniques such as zero-phase filtering, kernel density estimation and Fourier analysis allows it to be used effectively to detect and characterize signals with low signal-to-noise ratios.
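
    The Hilbert stage of such an analysis can be sketched with `scipy.signal.hilbert`, which returns the analytic signal whose envelope and phase derivative give the instantaneous amplitude and frequency. The test tone and sampling rate below are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

# Analytic signal of a 50 Hz test tone; the instantaneous frequency
# recovered from the unwrapped phase should sit at the tone frequency.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)

analytic = hilbert(x)                      # x + i * H{x}
amplitude = np.abs(analytic)               # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

print(np.median(inst_freq))
```

    In the full Hilbert-Huang transform this step is applied to each intrinsic mode function produced by empirical mode decomposition, not to the raw signal.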

  7. Scattering integral equations and four nucleon problem

    International Nuclear Information System (INIS)

    Narodetskii, I.M.

    1980-01-01

    Existing results from the application of the integral equation technique to four-nucleon bound states and scattering are reviewed. The first numerical calculations with the four-body integral equations were done ten years ago. Yet it is still widely believed that these equations are too complicated to solve numerically. The purpose of this review is to provide a clear and elementary introduction to the integral equation method and to demonstrate its usefulness in physical applications. The presentation is based on the quasiparticle approach. This permits a simple interpretation of the equations in terms of quasiparticle scattering. The mathematical basis for the quasiparticle approach is the Hilbert-Schmidt method of Fredholm integral equation theory. The first part of this review contains a detailed discussion of the Hilbert-Schmidt expansion as applied to the two-particle amplitudes and to the kernel of the four-body equations. The second part discusses the four-body quasiparticle equations and the results obtained for bound states and scattering

  8. Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Huiwu Luo

    2015-01-01

    Full Text Available The computational procedure for hyperspectral image (HSI) processing is extremely complex, not only due to the high-dimensional information, but also due to the highly correlated data structure, so effective processing and analysis of HSI face many difficulties. Dimensionality reduction has proven to be a powerful tool for high-dimensional data analysis, and local Fisher's linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low-dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.

  9. A two-step Hilbert transform method for 2D image reconstruction

    International Nuclear Information System (INIS)

    Noo, Frederic; Clackdoyle, Rolf; Pack, Jed D

    2004-01-01

    The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained
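
    The Hilbert filtering step along a line can be sketched in one dimension via the FFT, multiplying the spectrum by -i·sign(frequency). This is a sketch of the filtering idea only, not the paper's full differentiated-backprojection reconstruction; the test signal is an assumption.

```python
import numpy as np

def hilbert_filter(line):
    """Discrete Hilbert filtering of one sample line via the FFT: multiply
    the spectrum by -i*sign(frequency), then return the real part."""
    spectrum = np.fft.fft(line)
    kernel = -1j * np.sign(np.fft.fftfreq(len(line)))
    return np.real(np.fft.ifft(spectrum * kernel))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = np.cos(2 * np.pi * 8 * t)
y = hilbert_filter(x)
print(np.max(np.abs(y - np.sin(2 * np.pi * 8 * t))))  # Hilbert of cos is sin
```

    In the two-step method this filtering would be applied along selected lines of the backprojected derivative image rather than to a one-dimensional test signal.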

  10. PHOTOMETRIC REDSHIFT PROBABILITY DISTRIBUTIONS FOR GALAXIES IN THE SDSS DR8

    International Nuclear Information System (INIS)

    Sheldon, Erin S.; Cunha, Carlos E.; Mandelbaum, Rachel; Brinkmann, J.; Weaver, Benjamin A.

    2012-01-01

    We present redshift probability distributions for galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 8 imaging data. We used the nearest-neighbor weighting algorithm to derive the ensemble redshift distribution N(z), and individual redshift probability distributions P(z) for galaxies with r < 21.8 and u < 29.0. As part of this technique, we calculated weights for a set of training galaxies with known redshifts such that their density distribution in five-dimensional color-magnitude space was proportional to that of the photometry-only sample, producing a nearly fair sample in that space. We estimated the ensemble N(z) of the photometric sample by constructing a weighted histogram of the training-set redshifts. We derived P(z)'s for individual objects by using training-set objects from the local color-magnitude space around each photometric object. Using the P(z) for each galaxy can reduce the statistical error in measurements that depend on the redshifts of individual galaxies. The spectroscopic training sample is substantially larger than that used for the DR7 release. The newly added PRIMUS catalog is now the most important training set used in this analysis by a wide margin. We expect the primary sources of error in the N(z) reconstruction to be sample variance and spectroscopic failures: The training sets are drawn from relatively small volumes of space, and some samples have large incompleteness. Using simulations we estimated the uncertainty in N(z) due to sample variance at a given redshift to be ∼10%-15%. The uncertainty on calculations incorporating N(z) or P(z) depends on how they are used; we discuss the case of weak lensing measurements. The P(z) catalog is publicly available from the SDSS Web site.
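
    The weighted-histogram estimate of the ensemble N(z) can be sketched as below. The training redshifts and weights here are synthetic stand-ins, not the nearest-neighbor weights of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training set: spectroscopic redshifts plus weights that
# reweight the training density toward the photometric sample. Both the
# redshifts and the weights are made up for illustration.
train_z = rng.gamma(2.0, 0.15, size=5000)
weights = 1.0 + 0.5 * train_z

# Ensemble N(z): weighted histogram of the training-set redshifts
bins = np.linspace(0.0, 2.0, 41)
n_z, _ = np.histogram(train_z, bins=bins, weights=weights, density=True)
print(n_z.sum() * (bins[1] - bins[0]))  # integrates to 1
```

    Individual P(z) estimates follow the same idea restricted to training objects in the local color-magnitude neighborhood of each photometric object.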

  11. 6th Hilbert's problem and S.Lie's infinite groups

    International Nuclear Information System (INIS)

    Konopleva, N.P.

    1999-01-01

    Progress in solving Hilbert's sixth problem is demonstrated. This became possible thanks to gauge field theory in physics and to the geometrical treatment of gauge fields. It is shown that the geometry of fibre bundle spaces is the best basis for a solution of the problem under discussion. This talk was presented at the International Seminar '100 Years after Sophus Lie' (Leipzig, Germany)

  12. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  13. The limiting conditional probability distribution in a stochastic model of T cell repertoire maintenance.

    Science.gov (United States)

    Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen

    2010-04-01

    The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed. (c) 2009 Elsevier Inc. All rights reserved.
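
    The late-time behaviour before extinction can be explored with a Gillespie simulation of a birth and death process with absorbing origin; conditioning the late-time state on survival approximates the LCD. The rates, horizon, and initial state below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n0, birth, death, t_end):
    """Gillespie simulation of a birth and death process on the non-negative
    integers with per-individual rates and an absorbing state at 0."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.exponential(1.0 / ((birth + death) * n))
        if t > t_end:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n

# Approximate the limiting conditional distribution by the late-time states of
# surviving paths (illustrative subcritical rates, so extinction is certain).
samples = [simulate(5, birth=0.9, death=1.0, t_end=30.0) for _ in range(2000)]
surviving = [n for n in samples if n > 0]
print(len(surviving) / len(samples))
```

    The empirical distribution of `surviving` is the simulation analogue of the LCD; the paper's analytic approximations replace this brute-force conditioning.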

  14. Generalized Polar Decompositions for Closed Operators in Hilbert Spaces and Some Applications

    OpenAIRE

    Gesztesy, Fritz; Malamud, Mark; Mitrea, Marius; Naboko, Serguei

    2008-01-01

    We study generalized polar decompositions of densely defined, closed linear operators in Hilbert spaces and provide some applications to relatively (form) bounded and relatively (form) compact perturbations of self-adjoint, normal, and m-sectorial operators.

  15. A brief introduction to probability.

    Science.gov (United States)

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in 1600, French mathematics used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution" that is a function providing the probabilities of occurrence of different possible outcomes of a categorical or continuous variable. Specific attention will be focused on normal distribution that is the most relevant distribution applied to statistical analysis.
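
    As a concrete instance, the density of the normal distribution highlighted above can be written out directly:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density function of the normal distribution."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The standard normal density peaks at the mean, 1/sqrt(2*pi) ~ 0.3989
print(normal_pdf(0.0))
```
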

  16. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    The theoretical literature has a rich characterization of scoring rules for eliciting the subjective beliefs that an individual has for continuous events, but under the restrictive assumption of risk neutrality. It is well known that risk aversion can dramatically affect the incentives to correctly...... report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...... to distort reports. We characterize the comparable implications of the general case of a risk averse agent when facing a popular scoring rule over continuous events, and find that these concerns do not apply with anything like the same force. For empirically plausible levels of risk aversion, one can...

  17. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    Science.gov (United States)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Science Meeting to be held in January 2005. Since the submittal of these abstracts we are continuing refinement of the model coefficients derived for the case of a variable Turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition the variable Schmidt number formulation was incorporated in the code and we are in the process of determining the model constants.

  18. An advanced complex analysis problem book topological vector spaces, functional analysis, and Hilbert spaces of analytic functions

    CERN Document Server

    Alpay, Daniel

    2015-01-01

    This is an exercises book at the beginning graduate level, whose aim is to illustrate some of the connections between functional analysis and the theory of functions of one variable. A key role is played by the notions of positive definite kernel and of reproducing kernel Hilbert space. A number of facts from functional analysis and topological vector spaces are surveyed. Then, various Hilbert spaces of analytic functions are studied.

  19. Theory and experiments on Peano and Hilbert curve RFID tags

    Science.gov (United States)

    McVay, John; Hoorfar, Ahmad; Engheta, Nader

    2006-05-01

    Recently, there has been considerable interest in the area of Radio Frequency Identification (RFID) and Radio Frequency Tagging (RFTAG). This emerging area of interest can be applied for inventory control (commercial) as well as friend/foe identification (military) to name but a few. The current technology can be broken down into two main groups, namely passive and active RFID tags. Utilization of Space-Filling Curve (SFC) geometries, such as the Peano and Hilbert curves, has been recently investigated for use in completely passive RFID applications [1, 2]. In this work, we give an overview of our work on the space-filling curves and the potential for utilizing the electrically small, resonant characteristics of these curves for use in RFID technologies with an emphasis on the challenging issues involved when attempting to tag conductive objects. In particular, we investigate the possible use of these tags in conjunction with high impedance ground-planes made of Hilbert or Peano curve inclusions [3, 4] to develop electrically small RFID tags that may also radiate efficiently, within close proximity of large conductive objects [5].

  20. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities......, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...... updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber–Shiu functions and dependence....
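
    A finite-horizon ruin probability in the classical compound Poisson model can be estimated by straightforward Monte Carlo. This is a sketch with an illustrative premium rate, claim distribution, and horizon, none of which are taken from the book.

```python
import random

random.seed(0)

def finite_horizon_ruin(u, c=1.2, lam=1.0, horizon=100.0, n_paths=4000):
    """Monte Carlo estimate of the finite-horizon ruin probability in the
    classical compound Poisson model: reserve u + c*t minus unit-mean
    exponential claims arriving at rate lam (all parameters illustrative)."""
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += random.expovariate(lam)        # next claim arrival time
            if t > horizon:
                break
            claims += random.expovariate(1.0)   # exponential claim, mean 1
            if u + c * t - claims < 0:
                ruined += 1
                break
    return ruined / n_paths

# Ruin becomes less likely as the initial reserve u grows
print(finite_horizon_ruin(0.0), finite_horizon_ruin(10.0))
```

    The analytic machinery the book develops (Lundberg's inequality, the Cramér-Lundberg approximation) bounds and approximates exactly this quantity without simulation.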

  1. Diagonalization of Bounded Linear Operators on Separable Quaternionic Hilbert Space

    International Nuclear Information System (INIS)

    Feng Youling; Cao, Yang; Wang Haijun

    2012-01-01

    By using the representation of its complex-conjugate pairs, we have investigated the diagonalization of a bounded linear operator on separable infinite-dimensional right quaternionic Hilbert space. The sufficient condition for diagonalizability of quaternionic operators is derived. The result is applied to anti-Hermitian operators, which is essential for solving Schroedinger equation in quaternionic quantum mechanics.

  2. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The discussion whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability for false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
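
    The Monte Carlo estimation of the coincidence-count distribution between independent gamma renewal processes can be sketched as follows; the gamma shape, firing rate, coincidence window, and repetition count are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def gamma_spike_train(shape, rate, t_end):
    """Spike times of a gamma renewal process with mean firing rate `rate`
    (ISIs are Gamma(shape) scaled so the mean interval is 1/rate)."""
    isi = rng.gamma(shape, 1.0 / (shape * rate), size=int(3 * rate * t_end) + 10)
    times = np.cumsum(isi)
    return times[times < t_end]

def coincidences(a, b, width):
    """Spikes in `a` having at least one spike of `b` within +/- width."""
    return int(np.sum(np.min(np.abs(a[:, None] - b[None, :]), axis=1) <= width))

# Monte Carlo distribution of coincidence counts between two independent
# gamma trains; repeating this over many realizations samples the
# coincidence-count distribution whose Fano factor the paper studies.
counts = np.array([coincidences(gamma_spike_train(4, 20.0, 10.0),
                                gamma_spike_train(4, 20.0, 10.0), 0.005)
                   for _ in range(200)])
print(counts.mean(), counts.var() / counts.mean())  # mean and Fano factor
```
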

  3. Probability Distribution and Projected Trends of Daily Precipitation in China

    Institute of Scientific and Technical Information of China (English)

    CAO; Li-Ge; ZHONG; Jun; SU; Bu-Da; ZHAI; Jian-Qing; Macro; GEMMER

    2013-01-01

    Based on observed daily precipitation data from 540 stations and 3,839 gridded data points from the high-resolution regional climate model COSMO-Climate Limited-area Modeling (CCLM) for 1961–2000, the ability of CCLM to simulate daily precipitation in China is examined, and the variation of the daily precipitation distribution pattern is revealed. By applying probability distribution and extreme value theory to the projected daily precipitation (2011–2050) under the SRES A1B scenario with CCLM, trends of daily precipitation series and daily precipitation extremes are analyzed. Results show that, except for the western Qinghai-Tibetan Plateau and South China, the distribution patterns of the kurtosis and skewness calculated from the simulated and observed series are consistent with each other; their spatial correlation coefficients are above 0.75. The CCLM captures the distribution characteristics of daily precipitation over China well. It is projected that in some parts of the Jianghuai region, central-eastern Northeast China, and Inner Mongolia, the kurtosis and skewness will increase significantly, and precipitation extremes will increase during 2011–2050. The projected increase of the maximum daily rainfall and of the longest non-precipitation period during the flood season in the aforementioned regions also indicates increasing trends of droughts and floods over the next 40 years.
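
    The kurtosis and skewness statistics used above can be computed with `scipy.stats` on a daily precipitation series; the zero-inflated gamma data below is synthetic, standing in for station or CCLM output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic daily precipitation (mm) over ~40 years: mostly dry days with
# gamma-distributed wet-day amounts. Purely illustrative, not CCLM output.
n_days = 14600
wet = rng.random(n_days) < 0.3
precip = np.where(wet, rng.gamma(0.8, 8.0, size=n_days), 0.0)

# Heavy right tail: skewness and excess kurtosis far above Gaussian (0, 0)
print(stats.skew(precip), stats.kurtosis(precip))
```

    Rising skewness and kurtosis of such a series over time indicate a fatter right tail, i.e. a growing share of total rainfall arriving in extreme events.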

  4. Empirical mode decomposition and Hilbert transforms for analysis of oil-film interferograms

    International Nuclear Information System (INIS)

    Chauhan, Kapil; Ng, Henry C H; Marusic, Ivan

    2010-01-01

    Oil-film interferometry is rapidly becoming the preferred method for direct measurement of wall shear stress in studies of wall-bounded turbulent flows. Although being widely accepted as the most accurate technique, it does have inherent measurement uncertainties, one of which is associated with determining the fringe spacing. This is the focus of this paper. Conventional analysis methods involve a certain level of user input and thus some subjectivity. In this paper, we consider empirical mode decomposition (EMD) and the Hilbert transform as an alternative tool for analyzing oil-film interferograms. In contrast to the commonly used Fourier-based techniques, this new method is less subjective and, as it is based on the Hilbert transform, is superior for treating amplitude and frequency modulated data. This makes it particularly robust to wide differences in the quality of interferograms

  5. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
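
    The link between the lure evidence standard deviation and the z-ROC slope can be illustrated with synthetic UVSD-style data; the means, standard deviations, and criteria below are made-up values, not estimates from the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
# Synthetic UVSD-style evidence: lure sd 1.0, target sd 1.25 (made-up values)
lures = rng.normal(0.0, 1.00, 20000)
targets = rng.normal(1.0, 1.25, 20000)

# z-ROC: z(hit rate) against z(false-alarm rate) across confidence criteria;
# its slope estimates sd(lure evidence) / sd(target evidence)
criteria = np.linspace(-1.0, 2.0, 5)
z_fa = norm.ppf([(lures > c).mean() for c in criteria])
z_hit = norm.ppf([(targets > c).mean() for c in criteria])
slope = np.polyfit(z_fa, z_hit, 1)[0]
print(slope)  # close to 1.0 / 1.25 = 0.8
```

    Increasing the lure standard deviation while holding the target distribution fixed raises the slope, which is the signature the priming manipulation produced.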

  6. Thailand in the Face of the 1997 Asian Crisis and the Current Financial Crisis: An Interview With Johannes Dragsbæk Schmidt

    Directory of Open Access Journals (Sweden)

    Julia Scharinger

    2010-01-01

    Full Text Available Johannes Dragsbæk Schmidt studied International Relations and Development Studies at Aalborg University, Denmark. Since 1993, he has been Associate Professor in the Department of History, International and Social Studies. Prof Dragsbæk Schmidt has held visiting research fellowships in Australia, Thailand, Malaysia, Indonesia, the Philippines and Poland, and was a Visiting Professor at the Institute for Political Economy, Carleton University, Canada in 2009. Additionally he has been a consultant to UNESCO, the World Bank and the Irish Development Agency. Prof Dragsbæk Schmidt has a broad spectrum of research interests, varying from globalisation and international division of labour via refugees and human rights to social and welfare policy and state regulations with a focus on East and South-East Asia. / The interview was conducted by e-mail on 3 April, 27 April and 4 May 2010.

  7. Heterotic reduction of Courant algebroid connections and Einstein–Hilbert actions

    Energy Technology Data Exchange (ETDEWEB)

    Jurčo, Branislav, E-mail: jurco@karlin.mff.cuni.cz [Mathematical Institute, Faculty of Mathematics and Physics, Charles University, Prague 186 75 (Czech Republic); Vysoký, Jan, E-mail: vysoky@math.cas.cz [Institute of Mathematics of the Czech Academy of Sciences, Žitná 25, Prague 115 67 (Czech Republic); Mathematical Sciences Institute, Australian National University, Acton ACT 2601 (Australia)

    2016-08-15

    We discuss Levi-Civita connections on Courant algebroids. We define an appropriate generalization of the curvature tensor and compute the corresponding scalar curvatures in the exact and heterotic case, leading to generalized (bosonic) Einstein–Hilbert type of actions known from supergravity. In particular, we carefully analyze the process of the reduction for the generalized metric, connection, curvature tensor and the scalar curvature.

  8. Heterotic reduction of Courant algebroid connections and Einstein–Hilbert actions

    International Nuclear Information System (INIS)

    Jurčo, Branislav; Vysoký, Jan

    2016-01-01

    We discuss Levi-Civita connections on Courant algebroids. We define an appropriate generalization of the curvature tensor and compute the corresponding scalar curvatures in the exact and heterotic case, leading to generalized (bosonic) Einstein–Hilbert type of actions known from supergravity. In particular, we carefully analyze the process of the reduction for the generalized metric, connection, curvature tensor and the scalar curvature.

  9. Analysis of the Cofrentes instability with the Hilbert-Huang transform

    International Nuclear Information System (INIS)

    Blazquez, J.; Galindo, A.

    2010-01-01

    The most obvious application of the Hilbert-Huang transform is the denoising (signal isolation). In this article, the dynamic system is the power of a BWR reactor that undergoes instability. The signal and the dynamic systems are described, which in this case corresponds to a current incident in a commercial BWR reactor (Cofrentes). Finally, empirical modes are calculated and the results are analyzed.

  10. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  11. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    International Nuclear Information System (INIS)

    Agaltsov, A. D.; Novikov, R. G.

    2014-01-01

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  12. Scarred resonances and steady probability distribution in a chaotic microcavity

    International Nuclear Information System (INIS)

    Lee, Soo-Young; Rim, Sunghwan; Kim, Chil-Min; Ryu, Jung-Wan; Kwon, Tae-Yoon

    2005-01-01

    We investigate scarred resonances of a stadium-shaped chaotic microcavity. It is shown that two components with different chirality of the scarring pattern are slightly rotated in opposite ways from the underlying unstable periodic orbit, when the incident angles of the scarring pattern are close to the critical angle for total internal reflection. In addition, the correspondence of emission pattern with the scarring pattern disappears when the incident angles are much larger than the critical angle. The steady probability distribution gives a consistent explanation about these interesting phenomena and makes it possible to expect the emission pattern in the latter case

  13. Loaded dice in Monte Carlo : importance sampling in phase space integration and probability distributions for discrepancies

    NARCIS (Netherlands)

    Hameren, Andreas Ferdinand Willem van

    2001-01-01

    Discrepancies play an important role in the study of uniformity properties of point sets. Their probability distributions are a help in the analysis of the efficiency of the Quasi Monte Carlo method of numerical integration, which uses point sets that are distributed more uniformly than sets of

  14. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
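
    For instance, if the landing probability of each side is taken proportional to some measure of the side (a hypothetical spinner, not one from the article), the exact probabilities follow directly:

```python
from fractions import Fraction

# Hypothetical biased spinner: landing probability proportional to each
# side's length (the side lengths below are made-up numbers).
sides = [3, 4, 5, 6]
total = sum(sides)
probs = [Fraction(s, total) for s in sides]

print(probs)
print(sum(probs))  # a valid probability distribution sums to 1
```
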

  15. Von Neumann representations on self-dual Hilbert W* moduli

    International Nuclear Information System (INIS)

    Frank, M.

    1987-01-01

    Von Neumann algebras M of bounded operators on self-dual Hilbert W* moduli H possessing a cyclic-separating element x-bar in H are considered. The close relation of them to certain real subspaces of H is established. Under the supposition that the underlying W*-algebra is commutative, a Tomita-Takesaki type theorem is stated. The natural cone in H arising from the pair (M, x-bar) is investigated and its properties are obtained

  16. Infinite conformal symmetries and Riemann-Hilbert transformation in super principal chiral model

    International Nuclear Information System (INIS)

    Hao Sanru; Li Wei

    1989-01-01

    This paper presents a new symmetry transformation, the C transformation, in the super principal chiral model and discovers an infinite-dimensional Lie algebra related to the Virasoro algebra without central extension. By using the Riemann-Hilbert transformation, the physical origin of the C transformation is discussed

  17. Rosette of rosettes of Hilbert spaces in the indefinite metric state space of the quantized Maxwell field

    International Nuclear Information System (INIS)

    Gessner, W.; Ernst, V.

    1980-01-01

    The indefinite metric space S_M of the covariant form of the quantized Maxwell field M is analyzed in some detail. S_M contains not only the pre-Hilbert space X^0 of states of transverse photons which occurs in the Gupta-Bleuler formalism of the free M, but a whole rosette of continuously many, isomorphic, complete pre-Hilbert spaces L^q, disjoint up to the zero element o of S_M. The L^q are the maximal subspaces of S_M which allow the usual statistical interpretation. Each L^q corresponds uniquely to one square-integrable spatial distribution j^0(x) of the total charge Q=0. If M is in any state from L^q, the bare charge j^0(x) appears to be inseparably dressed by the quantum equivalent of its proper, classical Coulomb field E(x). The vacuum occurs only in the state space L^0 of the free Maxwell field. Each L^q contains a secondary rosette of continuously many, up to o disjoint, isomorphic Hilbert spaces H_g^q related to different electromagnetic gauges. The space H_0^q, which corresponds to the Coulomb gauge within the Lorentz gauge, plays a physically distinguished role in that only it leads to the usual concept of energy. If M is in any state from H_g^q, the bare 4-current j^0(x), j(x), where j(x) is any square-integrable, transverse current density in space, is endowed with its proper 4-potential, which depends on the chosen gauge, and with its proper, gauge-independent Coulomb-Oersted field E(x), B(x). However, these fields exist only in the sense of quantum mechanical expectation values equipped with the corresponding field fluctuations. So they are basically different from classical electromagnetic fields

  18. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the Logistic map, to generate the pseudo-random numbers mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency, it is recommended to adopt a chaotic map that generates sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
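
    As a rough illustration of the two quantities the abstract compares, the sketch below (ours, not the authors' code) estimates the Lyapunov exponent and the empirical PDF of the Logistic map at full chaos; for r = 4 the exponent is ln 2 and the invariant density is strongly non-uniform, peaking near 0 and 1:

    ```python
    import numpy as np

    def logistic_map(x):
        """Logistic map at full chaos (r = 4)."""
        return 4.0 * x * (1.0 - x)

    def logistic_deriv(x):
        return 4.0 * (1.0 - 2.0 * x)

    # Generate a chaotic sequence from a fixed seed point.
    n, x = 100_000, 0.123
    xs = np.empty(n)
    for i in range(n):
        x = logistic_map(x)
        # Guard against finite-precision collapse onto 0.5 -> 1 -> 0.
        if x in (0.0, 0.5, 1.0):
            x = 0.123
        xs[i] = x

    # Lyapunov exponent: orbit average of log |f'(x)|.
    lyap = np.mean(np.log(np.abs(logistic_deriv(xs))))

    # Empirical PDF via a histogram; for r = 4 the invariant density
    # is 1 / (pi * sqrt(x (1 - x))), far from uniform.
    hist, edges = np.histogram(xs, bins=50, range=(0.0, 1.0), density=True)
    ```

    The estimated `lyap` should approach ln 2 ≈ 0.693, and the histogram peaks at the interval ends, which is exactly the non-uniformity the paper argues degrades search efficiency.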

  19. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    Energy Technology Data Exchange (ETDEWEB)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B [Laboratorio de Ingenieria de Rehabilitacion e Investigaciones Neuromusculares y Sensoriales, Facultad de Ingenieria, UNER, Oro Verde (Argentina)]

    2007-11-15

    The surface electromyographic (SEMG) signal is a stochastic signal that has been modeled as a zero-mean Gaussian process. It has been experimentally shown that this probability distribution can be fitted with less error by a Laplacian-type distribution. The selection of estimators for detecting changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with upper motor neuron lesions, the lack of central control affects muscle tone, force, and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle during gait are evaluated in two healthy subjects and two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function is the one that best fits the experimental data in the studied subjects, although this depends largely on the subject and on the data segment analyzed.

  20. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    International Nuclear Information System (INIS)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B

    2007-01-01

    The surface electromyographic (SEMG) signal is a stochastic signal that has been modeled as a zero-mean Gaussian process. It has been experimentally shown that this probability distribution can be fitted with less error by a Laplacian-type distribution. The selection of estimators for detecting changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with upper motor neuron lesions, the lack of central control affects muscle tone, force, and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle during gait are evaluated in two healthy subjects and two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function is the one that best fits the experimental data in the studied subjects, although this depends largely on the subject and on the data segment analyzed
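
    The model comparison described above can be sketched as a maximum-likelihood fit of both candidate densities; the data here are synthetic Laplacian samples standing in for an SEMG segment, not the study's recordings:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic zero-mean amplitude samples drawn from a Laplacian,
    # standing in for a detrended SEMG segment (not the paper's data).
    x = rng.laplace(loc=0.0, scale=1.0, size=5000)

    # Fit both candidate models by maximum likelihood.
    mu_g, sd_g = stats.norm.fit(x)
    mu_l, b_l = stats.laplace.fit(x)

    # Compare fits via total log-likelihood (higher is better).
    ll_gauss = stats.norm.logpdf(x, mu_g, sd_g).sum()
    ll_lap = stats.laplace.logpdf(x, mu_l, b_l).sum()
    ```

    For heavy-tailed amplitude data, the Laplacian log-likelihood exceeds the Gaussian one, which is the kind of evidence the distribution-type selection rests on.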

  1. Quantum mechanics in an evolving Hilbert space

    Science.gov (United States)

    Artacho, Emilio; O'Regan, David D.

    2017-03-01

    Many basis sets for electronic structure calculations evolve with varying external parameters, such as moving atoms in dynamic simulations, giving rise to extra derivative terms in the dynamical equations. Here we revisit these derivatives in the context of differential geometry, thereby obtaining a more transparent formalization and a geometrical perspective for better understanding the resulting equations. The effect of the evolution of the basis set within the spanned Hilbert space separates explicitly from the effect of the turning of the space itself when moving in parameter space, just as the tangent space turns when moving in a curved space. New insights are obtained using familiar concepts in that context, such as the Riemann curvature. The differential geometry is not strictly that of curved spaces as in general relativity; a more adequate mathematical framework is provided by fiber bundles. The language used here, however, is restricted to tensors and basic quantum mechanics. The local gauge implied by a smoothly varying basis set readily connects with Berry's formalism for geometric phases. Generalized expressions for the Berry connection and curvature are obtained for a parameter-dependent occupied Hilbert space spanned by nonorthogonal Wannier functions. The formalism is applicable to basis sets made of atomic-like orbitals and to more adaptive moving basis functions (such as in methods using Wannier functions as intermediate or support bases), but should also apply to other situations in which nonorthogonal functions or related projectors may arise. The formalism is applied to the time-dependent quantum evolution of electrons for moving atoms. The geometric insights provided here allow us to propose new finite-difference time integrators and to better understand those already proposed.

  2. Spectral shaping of a randomized PWM DC-DC converter using maximum entropy probability distributions

    CSIR Research Space (South Africa)

    Dove, Albert

    2017-01-01

    maintaining constraints in a DC-DC converter is investigated. A probability distribution whose aim is to ensure maximal harmonic spreading while still maintaining constraints is presented. The PDFs are determined from a direct application of the method of Maximum...

  3. Aveiro method in reproducing kernel Hilbert spaces under complete dictionary

    Science.gov (United States)

    Mai, Weixiong; Qian, Tao

    2017-12-01

    The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the need to determine uniqueness sets in the underlying RKHS. In fact, in general spaces uniqueness sets are not easy to identify, let alone the question of the convergence speed of the method. To avoid these difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact, we do more: the new Aveiro Method is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), involving completion of a given dictionary. The new method is called the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, available for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element in the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.
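
    The matching pursuit idea underlying the method can be illustrated with a plain greedy sketch over a finite dictionary (generic matching pursuit, not P-OGA or AMUCD themselves; the atoms and signal are made up):

    ```python
    import numpy as np

    def matching_pursuit(f, dictionary, n_iter=10):
        """Greedy sparse approximation of f over unit-norm atoms.

        At each step, pick the atom with the largest inner product
        with the current residual and subtract its projection.
        """
        residual = f.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[0])
        for _ in range(n_iter):
            inner = dictionary @ residual          # correlations with atoms
            k = np.argmax(np.abs(inner))
            coeffs[k] += inner[k]
            residual -= inner[k] * dictionary[k]
        return coeffs, residual

    # Toy dictionary: 20 normalized random atoms in R^50, and a signal
    # that is an exact 2-sparse combination of two of them.
    rng = np.random.default_rng(1)
    D = rng.normal(size=(20, 50))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    f = 3.0 * D[4] + 0.5 * D[11]
    coeffs, residual = matching_pursuit(f, D, n_iter=25)
    ```

    Each iteration strictly shrinks the residual, and for a signal lying in the span of the atoms the residual decays geometrically, which is the convergence behaviour the dictionary-completion argument is concerned with.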

  4. Lagrangian single-particle turbulent statistics through the Hilbert-Huang transform.

    Science.gov (United States)

    Huang, Yongxiang; Biferale, Luca; Calzavarini, Enrico; Sun, Chao; Toschi, Federico

    2013-04-01

    The Hilbert-Huang transform is applied to analyze single-particle Lagrangian velocity data from numerical simulations of hydrodynamic turbulence. The velocity trajectory is described in terms of a set of intrinsic mode functions C_i(t) and of their instantaneous frequencies ω_i(t). On the basis of this decomposition we define the ω-conditioned statistical moments of the C_i modes, named q-order Hilbert spectra (HS). We show that such quantities have enhanced scaling properties compared to traditional Fourier transform- or correlation-based (structure function) statistical indicators, thus providing better insight into the turbulent energy transfer process. We present clear empirical evidence that the energy-like quantity, i.e., the second-order HS, displays a linear scaling in time in the inertial range, as expected from dimensional analysis. We also measure high-order moment scaling exponents in a direct way, without resorting to the extended self-similarity procedure. This leads to an estimate of the Lagrangian structure function exponents which is consistent with the multifractal prediction in the Lagrangian frame proposed by Biferale et al. [Phys. Rev. Lett. 93, 064502 (2004)].

  5. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
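
    The core step shared by this and the previous record, extracting amplitude and phase from a single signal via the Hilbert transform, can be sketched as follows; the input is a synthetic amplitude-modulated carrier, not real fringe data, and `scipy.signal.hilbert` returns the analytic signal:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Synthetic fringe-like signal: an amplitude-modulated carrier
    # standing in for one scan line of a time-average fringe pattern.
    fs = 1000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    amplitude = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)
    signal = amplitude * np.cos(2 * np.pi * 50 * t)

    # Analytic signal: real part is the input, imaginary part its
    # Hilbert transform; envelope and phase follow directly.
    analytic = hilbert(signal)
    envelope = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs  # carrier, about 50 Hz
    ```

    The recovered envelope tracks the modulation and the unwrapped phase gives the quantitative information a single interferogram carries, which is the essence of the single-frame evaluation described above.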

  6. Unstable quantum states and rigged Hilbert spaces

    International Nuclear Information System (INIS)

    Gorini, V.; Parravicini, G.

    1978-10-01

    Rigged Hilbert space techniques are applied to the quantum mechanical treatment of unstable states in nonrelativistic scattering theory. A method is discussed which is based on representations of decay amplitudes in terms of expansions over complete sets of generalized eigenvectors of the interacting Hamiltonian, corresponding to complex eigenvalues. These expansions contain both a discrete and a continuum contribution. The former corresponds to eigenvalues located at the second sheet poles of the S matrix, and yields the exponential terms in the survival amplitude. The latter arises from generalized eigenvectors associated to complex eigenvalues on background contours in the complex plane, and gives the corrections to the exponential law. 27 references

  7. Quantum holonomy theory and Hilbert space representations

    Energy Technology Data Exchange (ETDEWEB)

    Aastrup, Johannes [Mathematisches Institut, Universitaet Hannover (Germany); Moeller Grimstrup, Jesper [QHT Gruppen, Copenhagen Area (Denmark)

    2016-11-15

    We present a new formulation of quantum holonomy theory, which is a candidate for a non-perturbative and background-independent theory of quantum gravity coupled to matter and gauge degrees of freedom. The new formulation is based on a Hilbert space representation of the QHD(M) algebra, which is generated by holonomy-diffeomorphisms on a 3-dimensional manifold and by canonical translation operators on the underlying configuration space over which the holonomy-diffeomorphisms form a non-commutative C*-algebra. A proof that the state that generates the representation exists is left for later publications. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  8. Probability of primordial black hole pair creation in a modified gravitational theory

    International Nuclear Information System (INIS)

    Paul, B. C.; Paul, Dilip

    2006-01-01

    We compute the probability for the quantum creation of an inflationary universe with and without a pair of black holes in a modified gravity. The action of the modified theory of gravity contains αR² and δR⁻¹ terms in addition to a cosmological constant (Λ) in the Einstein-Hilbert action. The probabilities for the creation of a universe with a pair of black holes have been evaluated considering two different kinds of spatial sections, one which accommodates a pair of black holes and the other without a black hole. We adopt a technique prescribed by Bousso and Hawking to calculate the above creation probability in a semiclassical approximation using the Hartle-Hawking boundary condition. We note a class of new and physically interesting instanton solutions characterized by the parameters in the action. These instantons may play an important role in the creation of the early universe. We also note that the probability of creation of a universe with a pair of black holes is strongly suppressed with a positive cosmological constant when δ = 4Λ²/3 for α > 0, but it is more probable for α < -1/(6Λ). In the modified gravity considered here, instanton solutions are permitted even without a cosmological constant when one begins with a negative δ

  9. Design and performance of axes controller for the 50/80 cm ARIES Schmidt telescope

    Science.gov (United States)

    Kumar, T. S.; Banwar, R. N.

    We describe here the details of the R.A. and Dec axes controller for the 50/80 cm Schmidt telescope at the Aryabhatta Research Institute of observational sciencES (ARIES). Each axis is driven by a set of two motors for backlash-free motion and is coupled to an on-shaft encoder for absolute position measurements. Additional incremental encoders are provided through a backlash-free reduction for velocity feedback. A pulse width modulation (PWM) based proportional-integral (PI) controller is designed to drive the twin-motor drive of each axis. The overall telescope control architecture features a distributed network of simple low-cost PIC microcontrollers interfaced via a CAN bus and RS232 ports. Using this controller it has been observed that the rms velocity errors at slew, set, guide, fine and tracking speeds are negligible. Excessive preload on the gearbox bearings results in highly nonlinear behavior at fine speeds owing to the dynamics of friction. We found that the peak errors in the tracking performance and at fine speeds can be reduced by properly adjusting the preloads on the gearbox bearings.
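
    A PI velocity loop of the kind described can be sketched as below; the gains, loop rate, saturation limit, and first-order motor model are illustrative assumptions, not the ARIES controller's actual parameters:

    ```python
    # Minimal discrete PI velocity loop driving a normalized PWM duty
    # cycle, written as it might run on a microcontroller tick.
    # All numbers here are illustrative, not the ARIES values.

    KP, KI = 0.8, 2.5        # proportional and integral gains
    DT = 0.001               # 1 kHz control loop
    PWM_MAX = 1.0            # normalized duty-cycle limit

    def pi_step(setpoint, measured, integ):
        """One PI update; returns PWM duty command and updated integrator."""
        error = setpoint - measured
        new_integ = integ + error * DT
        out = KP * error + KI * new_integ
        # Clamp the duty cycle and freeze the integrator when saturated
        # (simple anti-windup).
        if out > PWM_MAX:
            out, new_integ = PWM_MAX, integ
        elif out < -PWM_MAX:
            out, new_integ = -PWM_MAX, integ
        return out, new_integ

    # Crude first-order motor: velocity relaxes toward gain * duty
    # with a 50 ms time constant.
    vel, integ = 0.0, 0.0
    target = 0.5             # tracking-rate setpoint (arbitrary units)
    for _ in range(5000):
        duty, integ = pi_step(target, vel, integ)
        vel += (2.0 * duty - vel) * DT / 0.05

    steady_error = abs(target - vel)
    ```

    The integral term removes the steady-state velocity error, which is what makes the rms tracking error small at the slow guide and tracking rates mentioned above.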

  10. Quantum game theory based on the Schmidt decomposition

    International Nuclear Information System (INIS)

    Ichikawa, Tsubasa; Tsutsui, Izumi; Cheon, Taksu

    2008-01-01

    We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the game of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt is examined. We find that entanglement transforms these dilemmas into each other but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree

  11. Physical characterization of a watershed through GIS: a study in the Schmidt stream, Brazil.

    Science.gov (United States)

    Reis, D R; Plangg, R; Tundisi, J G; Quevedo, D M

    2015-12-01

    Remote sensing and geoprocessing are essential tools for obtaining and maintaining records of human actions on space over the course of time; these tools offer the basis for diagnoses of land use, environmental interference and local development. The Schmidt stream watershed, located in the Sinos River basin in southern Brazil, has an environmental situation similar to that of the majority of small streams draining rural and urban areas in southern Brazil: agricultural and urbanization practices that do not respect the riparian area, removal of the original vegetation in disregard of land-use suitability, removal of wetlands, intensive water use for various activities, and a lack of control and monitoring of wastewater discharge, among other factors, deteriorate the quality of this important environment. This article aims to achieve a physical characterization of the Schmidt stream watershed (Sinos River basin), identifying elements such as land use and occupation, soil science, geology, climatology, and the extent and location of the watershed, among others, so as to serve as the basis for a tool that helps in the integrated environmental management of watersheds. By applying a geographic information system (GIS) to the process of obtaining maps of land use and occupation and pedological and geological maps, and by using climatological data from the Campo Bom meteorological station, field visits, a review of the literature and journals, and publicly available data, the physical characterization of the Schmidt stream watershed was performed, with a view to the integrated environmental management of this watershed. Out of the total area of the Schmidt stream watershed (23.92 km²), in terms of geology, 23.7% consists of colluvial deposits, 22.6% of grass facies, and 53.7% of the Botucatu formation. Major soil types of the watershed: 97.4% Argisols and only 2.6% Planosols. Land use and occupation is characterized by wetland (0.5%), Native Forest (12

  12. Probability density function of a puff dispersing from the wall of a turbulent channel

    Science.gov (United States)

    Nguyen, Quoc; Papavassiliou, Dimitrios

    2015-11-01

    Study of dispersion of passive contaminants in turbulence has proved to be helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental works have been carried out to locate and track motions of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record locations of markers. While this has proved to be useful, high computational cost remains a concern. In this study, we develop a model that could reproduce results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a frictional Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) that represents distribution of markers in the flow field. The PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove to be helpful where DNS and LST are not always available.

  13. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially large time as the noise approaches zero, and the majority of this time is wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise but may exhibit certain deviations for large noise. Finally, some possible ways to improve our method are discussed.
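
    For contrast with the accelerated interface method, the plain Monte Carlo baseline it improves on can be sketched as follows: Euler-Maruyama paths escaping from a one-dimensional quadratic well, with the boundary-crossing side recorded as the (here one-dimensional) exit location. The potential and noise strength are illustrative, not the paper's examples:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def exit_sides(eps, n_paths=200, dt=1e-3):
        """Euler-Maruyama escape from the well V(x) = x**2 / 2 on (-1, 1).

        All paths start at the attractor x = 0 and run until they cross
        a boundary; no reinjection is used, so most of the work is the
        wandering near the attractor that the paper's method avoids.
        """
        x = np.zeros(n_paths)
        out = np.zeros(n_paths)
        active = np.ones(n_paths, dtype=bool)
        while active.any():
            n = active.sum()
            # dx = -V'(x) dt + sqrt(2 eps dt) N(0, 1)
            x[active] += -x[active] * dt + np.sqrt(2 * eps * dt) * rng.normal(size=n)
            escaped = active & (np.abs(x) >= 1.0)
            out[escaped] = np.sign(x[escaped])
            active &= ~escaped
        return out

    exits = exit_sides(eps=0.5)
    right_fraction = np.mean(exits > 0)  # symmetry: close to 1/2
    ```

    As the noise `eps` is lowered, the mean escape time grows exponentially (Kramers' law), which is why naive sampling of the exit distribution becomes infeasible in the weak-noise limit.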

  14. Constructing probability distributions of uncertain variables in models of the performance of the Waste Isolation Pilot Plant: The 1990 performance simulations

    International Nuclear Information System (INIS)

    Tierney, M.S.

    1990-12-01

    A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent and minimizes the amount of spurious information that is often introduced into a distribution by assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. (author)

  15. Constructing probability distributions of uncertain variables in models of the performance of the Waste Isolation Pilot Plant: The 1990 performance simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, M S

    1990-12-15

    A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent and minimizes the amount of spurious information that is often introduced into a distribution by assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. (author)
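
    The Maximum Entropy Formalism step of such a procedure can be sketched for the simplest nontrivial case: a variable known to lie in a range [a, b] with a subjectively estimated mean m, whose maximum-entropy density is a truncated exponential. The numbers below are illustrative, not WIPP parameters:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Maximum-entropy density on [a, b] with a prescribed mean m:
    # p(x) proportional to exp(lam * x), where lam solves mean(lam) = m.
    # (With no mean constraint the answer is simply uniform on [a, b].)

    def truncated_exp_mean(lam, a, b):
        """Mean of p(x) ~ exp(lam * x) restricted to [a, b]."""
        if abs(lam) < 1e-12:
            return 0.5 * (a + b)
        ea, eb = np.exp(lam * a), np.exp(lam * b)
        return (b * eb - a * ea) / (eb - ea) - 1.0 / lam

    def maxent_lambda(a, b, m):
        """Solve for lam matching the expert-supplied mean m."""
        return brentq(lambda lam: truncated_exp_mean(lam, a, b) - m, -50.0, 50.0)

    # Example: variable known to lie in [0, 1] with estimated mean 0.3.
    lam = maxent_lambda(0.0, 1.0, 0.3)
    ```

    A mean below the interval midpoint yields a negative `lam`, i.e. a density decaying toward the upper bound, which is the least-presumptive distribution consistent with the two pieces of expert information.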

  16. The classes of the quasihomogeneous Hilbert schemes of points on the plane

    NARCIS (Netherlands)

    Buryak, A.

    2012-01-01

    Abstract: In this paper we give a formula for the classes (in the Grothendieck ring of complex quasi-projective varieties) of irreducible components of -quasi-homogeneous Hilbert schemes of points on the plane. We find a new simple geometric interpretation of the -Catalan numbers. Finally, we

  17. From Kant to Hilbert a source book in the foundations of mathematics

    CERN Document Server

    Ewald, William Bragg

    1996-01-01

    This two-volume work brings together a comprehensive selection of mathematical works from the period 1707-1930. During this time the foundations of modern mathematics were laid, and From Kant to Hilbert provides an overview of the foundational work in each of the main branches of mathematics, with narratives showing how they were linked. Now available as a separate volume. Immanuel Kant's Critique of Pure Reason is widely taken to be the starting point of the modern period of mathematics, while David Hilbert was the last great mainstream mathematician to pursue important nineteenth-century ideas. This two-volume work provides an overview of this important era of mathematical research through a carefully chosen selection of articles. They provide an insight into the foundations of each of the main branches of mathematics--algebra, geometry, number theory, analysis, logic and set theory--with narratives to show how they are linked. Classic works by Bolzano, Riemann, Hamilton, Dedekind, and Poincaré are repro...

  18. Schmidt-Kalman Filter with Polynomial Chaos Expansion for Orbit Determination of Space Objects

    Science.gov (United States)

    Yang, Y.; Cai, H.; Zhang, K.

    2016-09-01

    Parameter errors in orbital models can result in poor orbit determination (OD) using a traditional Kalman filter. One approach to account for these errors is to consider them in the so-called Schmidt-Kalman filter (SKF), by augmenting the state covariance matrix (CM) with additional parameter covariance rather than additively estimating these so-called "consider" parameters. This paper introduces a new SKF algorithm with polynomial chaos expansion (PCE-SKF). The PCE approach has been shown to be more efficient than the Monte Carlo method for propagating input uncertainties onto the system response, without being restricted to linear dynamics or Gaussian distributions of the uncertainty sources. The state and covariance needed in the orbit prediction step are propagated using PCE. An inclined geosynchronous orbit scenario is set up to test the proposed PCE-SKF based OD algorithm. The satellite orbit is propagated based on numerical integration, with the uncertain coefficient of solar radiation pressure considered. The PCE-SKF solutions are compared with extended Kalman filter (EKF), SKF and PCE-EKF (EKF with PCE) solutions. The results imply that covariance propagation using PCE leads to more precise OD solutions than those based on linear propagation of covariance.
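
    The consider-parameter idea at the heart of the SKF can be sketched in a minimal measurement update; the scalar example and its numbers are illustrative, not the paper's OD setup:

    ```python
    import numpy as np

    def skf_update(x, Pxx, Pxp, Ppp, z, Hx, Hp, R):
        """Schmidt-Kalman measurement update with consider parameters.

        The consider parameters (mean assumed zero) are never estimated:
        only the state x is corrected, but their covariance Ppp and the
        cross-covariance Pxp shape the gain and the updated uncertainty.
        """
        # Innovation covariance includes the consider-parameter terms.
        S = (Hx @ Pxx @ Hx.T + Hx @ Pxp @ Hp.T
             + Hp @ Pxp.T @ Hx.T + Hp @ Ppp @ Hp.T + R)
        K = (Pxx @ Hx.T + Pxp @ Hp.T) @ np.linalg.inv(S)
        x_new = x + K @ (z - Hx @ x)          # consider mean is zero
        IKH = np.eye(len(x)) - K @ Hx
        Pxx_new = IKH @ Pxx - K @ Hp @ Pxp.T
        Pxp_new = IKH @ Pxp - K @ Hp @ Ppp
        return x_new, Pxx_new, Pxp_new        # Ppp itself is unchanged

    # Toy scalar example: one position state, one consider sensor bias.
    x = np.array([0.0])
    Pxx = np.array([[1.0]]); Pxp = np.zeros((1, 1)); Ppp = np.array([[0.25]])
    Hx = np.array([[1.0]]);  Hp = np.array([[1.0]]);  R = np.array([[0.1]])
    x1, Pxx1, Pxp1 = skf_update(x, Pxx, Pxp, Ppp, np.array([0.9]), Hx, Hp, R)
    ```

    The bias covariance inflates the innovation covariance, so the gain is smaller than a standard Kalman gain and the posterior variance stays honestly larger, while the new negative cross-covariance records the correlation the update induces between state and bias.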

  19. A High-Resolution Demodulation Algorithm for FBG-FP Static-Strain Sensors Based on the Hilbert Transform and Cross Third-Order Cumulant

    Directory of Open Access Journals (Sweden)

    Wenzhu Huang

    2015-04-01

    Full Text Available Static strain can be detected by measuring a cross-correlation of reflection spectra from two fiber Bragg gratings (FBGs. However, the static-strain measurement resolution is limited by the dominant Gaussian noise source when using this traditional method. This paper presents a novel static-strain demodulation algorithm for FBG-based Fabry-Perot interferometers (FBG-FPs. The Hilbert transform is proposed for changing the Gaussian distribution of the two FBG-FPs’ reflection spectra, and a cross third-order cumulant is used to use the results of the Hilbert transform and get a group of noise-vanished signals which can be used to accurately calculate the wavelength difference of the two FBG-FPs. The benefit by these processes is that Gaussian noise in the spectra can be suppressed completely in theory and a higher resolution can be reached. In order to verify the precision and flexibility of this algorithm, a detailed theory model and a simulation analysis are given, and an experiment is implemented. As a result, a static-strain resolution of 0.9 nε under laboratory environment condition is achieved, showing a higher resolution than the traditional cross-correlation method.

  20. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-06-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
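
    Constructing an FDC from a fitted distribution can be sketched as below; the "flows" are synthetic lognormal draws (one of the candidate families named above), not gauged streamflow data, though scipy also provides `kappa4` and `genpareto` for the other candidates:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Synthetic stand-in for a daily streamflow record (not real data):
    # lognormal flows spanning several orders of magnitude.
    flows = rng.lognormal(mean=2.0, sigma=1.5, size=3000)

    # Empirical FDC: sort flows in descending order; the i-th largest
    # flow is equaled or exceeded with Weibull plotting-position
    # probability i / (n + 1).
    sorted_flows = np.sort(flows)[::-1]
    exceed_prob = np.arange(1, flows.size + 1) / (flows.size + 1)

    # Parametric FDC: fit a candidate distribution and read off its
    # quantiles at the same exceedance probabilities.
    shape, loc, scale = stats.lognorm.fit(flows, floc=0)
    model_flows = stats.lognorm.ppf(1.0 - exceed_prob, shape, loc=loc, scale=scale)

    # Typical relative mismatch between the two curves.
    rel_err = np.median(np.abs(model_flows - sorted_flows) / sorted_flows)
    ```

    Comparing empirical and modeled quantile curves in this way is the basic goodness-of-fit exercise behind identifying which family best reproduces FDCs across regions.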

  1. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area under the receiver operating characteristic curve (Az) of 0.9096, which were superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). These results demonstrate the merits of bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.
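
    The kernel-density / maximal-posterior pipeline can be sketched as follows; the bivariate features are synthetic Gaussian clusters, not VAG recordings, and equal class priors are assumed:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(4)
    # Synthetic bivariate features standing in for two VAG signal
    # groups (normal vs. abnormal) -- not real clinical data.
    normal = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(100, 2))
    abnormal = rng.normal(loc=[2.0, 1.5], scale=0.8, size=(100, 2))

    # Kernel density estimate of each class-conditional distribution.
    kde_n = gaussian_kde(normal.T)
    kde_a = gaussian_kde(abnormal.T)
    prior_n = prior_a = 0.5           # equal priors for the sketch

    def classify(points):
        """Maximal posterior probability rule: pick the class whose
        prior-weighted density is larger at each point."""
        post_n = prior_n * kde_n(points.T)
        post_a = prior_a * kde_a(points.T)
        return np.where(post_a > post_n, "abnormal", "normal")

    labels = classify(np.vstack([normal, abnormal]))
    truth = np.array(["normal"] * 100 + ["abnormal"] * 100)
    accuracy = np.mean(labels == truth)
    ```

    Because the decision rule compares full estimated densities rather than a single separating direction, it can follow curved class boundaries, which is the advantage the paper reports over the linear discriminant.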

  2. Distributional Watson transforms

    NARCIS (Netherlands)

    Dijksma, A.; Snoo, H.S.V. de

    1974-01-01

    For all Watson transforms W in L²(R⁺) a triple of Hilbert spaces L_G ⊂ L²(R⁺) ⊂ L'_G is constructed such that W may be extended to L'_G. These results allow the construction of a triple L ⊂ L²(R⁺) ⊂ L', where L is a Gelfand-Fréchet space. This leads to a theory of distributional Watson transforms.

  3. From axiomatics of quantum probability to modelling geological uncertainty and management of intelligent hydrocarbon reservoirs with the theory of open quantum systems

    Science.gov (United States)

    Lozada Aguilar, Miguel Ángel; Khrennikov, Andrei; Oleschko, Klaudia

    2018-04-01

    As was recently shown by the authors, quantum probability theory can be used for the modelling of the process of decision-making (e.g. probabilistic risk analysis) for macroscopic geophysical structures such as hydrocarbon reservoirs. This approach can be considered as a geophysical realization of Hilbert's programme on axiomatization of statistical models in physics (the famous sixth Hilbert problem). In this conceptual paper, we continue development of this approach to decision-making under uncertainty which is generated by complexity, variability, heterogeneity, anisotropy, as well as the restrictions to accessibility of subsurface structures. The belief state of a geological expert about the potential of exploring a hydrocarbon reservoir is continuously updated by outputs of measurements, and selection of mathematical models and scales of numerical simulation. These outputs can be treated as signals from the information environment E. The dynamics of the belief state can be modelled with the aid of the theory of open quantum systems: a quantum state (representing uncertainty in beliefs) is dynamically modified through coupling with E; stabilization to a steady state determines a decision strategy. In this paper, the process of decision-making about hydrocarbon reservoirs (e.g. `explore or not?'; `open new well or not?'; `contaminated by water or not?'; `double or triple porosity medium?') is modelled by using the Gorini-Kossakowski-Sudarshan-Lindblad equation. In our model, this equation describes the evolution of experts' predictions about a geophysical structure. We proceed with the information approach to quantum theory and the subjective interpretation of quantum probabilities (due to quantum Bayesianism). This article is part of the theme issue `Hilbert's sixth problem'.
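    The GKSL (Lindblad) dynamics invoked above — a state coupled to an environment relaxing to a steady state — can be illustrated with a minimal two-level sketch. The Hamiltonian, the single jump operator and the rate below are hypothetical placeholders, not the paper's belief-state model.

```python
import numpy as np
from scipy.linalg import expm

# Two-level GKSL (Lindblad) sketch. H, C and gamma are hypothetical
# placeholders, not the paper's operators.
H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)   # Hamiltonian
C = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)    # jump (decay) operator
gamma = 0.5                                              # coupling rate
I2 = np.eye(2, dtype=complex)

def liouvillian(H, C, gamma):
    # Column-stacking convention: vec(A X B) = (B^T kron A) vec(X).
    comm = -1j * (np.kron(I2, H) - np.kron(H.T, I2))
    CdC = C.conj().T @ C
    diss = gamma * (np.kron(C.conj(), C)
                    - 0.5 * np.kron(I2, CdC)
                    - 0.5 * np.kron(CdC.T, I2))
    return comm + diss

L = liouvillian(H, C, gamma)
rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # maximally uncertain state
rho_t = (expm(L * 20.0) @ rho0.flatten(order="F")).reshape(2, 2, order="F")

print(np.round(rho_t.real, 3))   # relaxes toward the first basis state
```

    The trace of rho_t stays 1 (the GKSL generator is trace preserving), and the initially uncertain state stabilizes to a definite one — the "decision" in the paper's reading.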

  4. From axiomatics of quantum probability to modelling geological uncertainty and management of intelligent hydrocarbon reservoirs with the theory of open quantum systems.

    Science.gov (United States)

    Lozada Aguilar, Miguel Ángel; Khrennikov, Andrei; Oleschko, Klaudia

    2018-04-28

    As was recently shown by the authors, quantum probability theory can be used for the modelling of the process of decision-making (e.g. probabilistic risk analysis) for macroscopic geophysical structures such as hydrocarbon reservoirs. This approach can be considered as a geophysical realization of Hilbert's programme on axiomatization of statistical models in physics (the famous sixth Hilbert problem). In this conceptual paper, we continue development of this approach to decision-making under uncertainty which is generated by complexity, variability, heterogeneity, anisotropy, as well as the restrictions to accessibility of subsurface structures. The belief state of a geological expert about the potential of exploring a hydrocarbon reservoir is continuously updated by outputs of measurements, and selection of mathematical models and scales of numerical simulation. These outputs can be treated as signals from the information environment E. The dynamics of the belief state can be modelled with the aid of the theory of open quantum systems: a quantum state (representing uncertainty in beliefs) is dynamically modified through coupling with E; stabilization to a steady state determines a decision strategy. In this paper, the process of decision-making about hydrocarbon reservoirs (e.g. 'explore or not?'; 'open new well or not?'; 'contaminated by water or not?'; 'double or triple porosity medium?') is modelled by using the Gorini-Kossakowski-Sudarshan-Lindblad equation. In our model, this equation describes the evolution of experts' predictions about a geophysical structure. We proceed with the information approach to quantum theory and the subjective interpretation of quantum probabilities (due to quantum Bayesianism). This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  5. Resolution dependence on phase extraction by the Hilbert transform in phase calibrated and dispersion compensated ultrahigh resolution spectrometer-based OCT

    DEFF Research Database (Denmark)

    Israelsen, Niels Møller; Maria, Michael; Feuchter, Thomas

    2018-01-01

    …non-linearities lead together to an unknown chirp of the detected interferogram. One method to compensate for the chirp is to perform a pixel-wavenumber calibration versus phase, which requires numerical extraction of the phase. Typically a Hilbert transform algorithm is employed to extract the optical phase versus wavenumber for calibration and dispersion compensation. In this work we demonstrate UHR-OCT at 1300 nm using a supercontinuum source and highlight the resolution constraints in using the Hilbert transform algorithm when extracting the optical phase for calibration and dispersion compensation. We demonstrate that the constraints cannot be explained purely by the numerical errors in the data processing module utilizing the Hilbert transform but must be dictated by broadening mechanisms originating from the experimentally obtained interferograms.

  6. Geometry and experience: Einstein's 1921 paper and Hilbert's axiomatic system

    International Nuclear Information System (INIS)

    De Gandt, Francois

    2006-01-01

    In his 1921 paper Geometrie und Erfahrung, Einstein describes the new epistemological status of geometry, divorced from any intuitive or a priori content. He calls this 'axiomatics', following Hilbert's theoretical developments on axiomatic systems, which started with the stimulus given by a talk by Hermann Wiener in 1891 and progressed until the Foundations of Geometry in 1899. Difficult questions arise: how is a theoretical system related to an intuitive empirical content?

  7. Vertex operators, non-abelian orbifolds and the Riemann-Hilbert problem

    International Nuclear Information System (INIS)

    Gato, B.; Massachusetts Inst. of Tech., Cambridge

    1990-01-01

    We show how to construct the oscillator part of vertex operators for the bosonic string moving on non-abelian orbifolds, using the conserved charges method. When the three-string vertices are twisted by non-commuting group elements, the construction of the conserved charges becomes the Riemann-Hilbert problem with monodromy matrices given by the twists. This is solvable for any given configuration and any non-abelian orbifold. (orig.)

  8. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks

    International Nuclear Information System (INIS)

    Zhuang Jiancang; Ogata, Yosihiko

    2006-01-01

    The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases: when the process is subcritical, critical, and supercritical. One direct use of these probability distributions is to evaluate the probability of an earthquake being a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata et al., Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.
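    The notion of a cluster's largest event can be explored with a much-simplified Monte Carlo stand-in for the ETAS branching process: Galton-Watson offspring with a magnitude-independent rate and Gutenberg-Richter magnitudes. The parameters are illustrative assumptions, and the estimate is not expected to reproduce the record's 15% or 8% figures.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = np.log(10)     # Gutenberg-Richter b = 1: magnitudes above threshold ~ Exp(beta)
n_branch = 0.6        # mean offspring per event (subcritical branching ratio)

def cluster_magnitudes(m0, max_events=10_000):
    # Galton-Watson cascade: every event triggers Poisson(n_branch) children;
    # all magnitudes are drawn from the same exponential (G-R) law.
    mags = [m0]
    frontier = 1
    while frontier and len(mags) < max_events:
        kids = int(rng.poisson(n_branch, size=frontier).sum())
        mags.extend(rng.exponential(1 / beta, size=kids))
        frontier = kids
    return mags

trials = 20_000
m0 = rng.exponential(1 / beta, size=trials)      # background-event magnitudes
hits = sum(max(cluster_magnitudes(m)) > m for m in m0)
p = hits / trials
print(f"P(background event has a larger descendant) ~ {p:.3f}")
```

    In ETAS proper the offspring rate grows exponentially with the parent magnitude, which is exactly the magnitude dependence this toy model omits.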

  9. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks.

    Science.gov (United States)

    Zhuang, Jiancang; Ogata, Yosihiko

    2006-04-01

    The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases: when the process is subcritical, critical, and supercritical. One direct use of these probability distributions is to evaluate the probability of an earthquake being a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.

  10. Providing probability distributions for the causal pathogen of clinical mastitis using naive Bayesian networks

    NARCIS (Netherlands)

    Steeneveld, W.; Gaag, van der L.C.; Barkema, H.W.; Hogeveen, H.

    2009-01-01

    Clinical mastitis (CM) can be caused by a wide variety of pathogens and farmers must start treatment before the actual causal pathogen is known. By providing a probability distribution for the causal pathogen, naive Bayesian networks (NBN) can serve as a management tool for farmers to decide which

  11. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    The Monte Carlo method is commonly used to observe the overall distribution and to determine lower or upper bound values in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method entitled 'Two-Step Tail Area Sampling' is developed, which uses the assumption of a discrete probability distribution and samples only the tail area without distorting the overall distribution. This method uses a two-step sampling procedure: first, sampling at points separated by large intervals is done; second, sampling at points separated by small intervals is done, with some check points determined at the first step. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than the Monte Carlo method for the same number of calculations. This new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of pressurized light-water nuclear reactors.
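    One reading of the two-step idea — coarse sampling to bracket the tail, then fine sampling only inside the bracket — can be sketched as a percentile search on a known distribution. The normal target and grid sizes are assumptions for illustration, not the record's procedure.

```python
import numpy as np
from scipy.stats import norm

# Goal: the 99.9th percentile of a (here, standard normal) distribution.
target = 0.999

# Step 1: coarse sampling across the whole range locates the tail bracket.
coarse = np.linspace(-10.0, 10.0, 41)            # spacing 0.5
cdf_c = norm.cdf(coarse)
i = int(np.searchsorted(cdf_c, target))          # first coarse point past the target
lo, hi = coarse[i - 1], coarse[i]                # check points bracketing the tail

# Step 2: fine sampling only inside the bracket; the bulk is never revisited.
fine = np.linspace(lo, hi, 2001)
q = fine[int(np.searchsorted(norm.cdf(fine), target))]

print(f"estimated 99.9th percentile: {q:.4f}")   # exact value is about 3.0902
```

    The saving is that the fine grid covers only the bracket found in step 1, so the dense evaluation effort is spent entirely in the tail.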

  12. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict among the criterion results in selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of the air pollutant data in Kuala Lumpur.
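    The fit-and-rank procedure can be sketched with SciPy. The synthetic data, the use of the Kolmogorov-Smirnov statistic as the single criterion, and maximum-likelihood fitting are assumptions standing in for the record's five criteria and weight-of-ranks combination.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic stand-in for a pollutant series (the record's API data are not
# reproduced here); the true generator is Gamma(shape=2, scale=15).
data = rng.gamma(shape=2.0, scale=15.0, size=500)

candidates = {
    "lognorm": stats.lognorm,
    "expon": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

# Fit each candidate by maximum likelihood, then rank by the Kolmogorov-Smirnov
# statistic (smaller = better fit).
scores = {}
for name, dist in candidates.items():
    params = dist.fit(data)
    scores[name] = stats.kstest(data, dist.cdf, args=params).statistic

best = min(scores, key=scores.get)
print(best, round(scores[best], 4))
```

    In practice several criteria (K-S, Anderson-Darling, AIC, ...) can disagree, which is exactly why the record resorts to a weight-of-ranks combination.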

  13. Foundations of quantization for probability distributions

    CERN Document Server

    Graf, Siegfried

    2000-01-01

    Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures), presenting some new results for the first time. Written for researchers and graduate students in probability theory, the monograph is of potential interest to all people working in the disciplines mentioned above.

  14. Real analysis measure theory, integration, and Hilbert spaces

    CERN Document Server

    Stein, Elias M

    2005-01-01

    Real Analysis is the third volume in the Princeton Lectures in Analysis, a series of four textbooks that aim to present, in an integrated manner, the core areas of analysis. Here the focus is on the development of measure and integration theory, differentiation and integration, Hilbert spaces, and Hausdorff measure and fractals. This book reflects the objective of the series as a whole: to make plain the organic unity that exists between the various parts of the subject, and to illustrate the wide applicability of ideas of analysis to other fields of mathematics and science. After

  15. Pöschl-Teller potentials based solution to Hilbert's tenth problem

    Directory of Open Access Journals (Sweden)

    Juan Ospina

    2006-12-01

    Hypercomputers compute functions or numbers, or more generally solve problems or carry out tasks, that cannot be computed or solved by a Turing machine. An adaptation of Tien D. Kieu's quantum hypercomputational algorithm is carried out for the dynamical algebra su(1,1) of the Pöschl-Teller potentials. The classically incomputable problem that is resolved with this hypercomputational algorithm is Hilbert's tenth problem. We indicate that an essential mathematical condition of these algorithms is the existence of infinite-dimensional unitary irreducible representations of low-dimensional dynamical algebras that allow the construction of coherent states of the Barut-Girardello type. In addition, we present, as a particular case of our hypercomputational algorithm on Pöschl-Teller potentials, the hypercomputational algorithm on an infinite square well presented previously by the authors.

  16. Research on Energy-Saving Design of Overhead Travelling Crane Camber Based on Probability Load Distribution

    Directory of Open Access Journals (Sweden)

    Tong Yifei

    2014-01-01

    Crane is a mechanical device used widely to move materials in modern production. It is reported that the energy consumption of cranes in China is at least 5-8 times that of other developing countries. Thus, energy consumption has become an unavoidable topic. Several factors influence the energy loss, and the camber of the girder is one that should not be neglected. In this paper, the problem of the deflections induced by the moving payload in the girder of an overhead travelling crane is examined. The evaluation of a camber giving a counter-deflection of the girder is proposed in order to minimize the energy consumed by the trolley moving along a non-straight support. To this aim, probabilistic payload distributions are considered instead of the fixed or rated loads used in other research. Taking a 50/10 t bridge crane as the research object, the probability loads are determined by analysis of load distribution density functions. According to the load distribution, camber design under different probability loads is discussed in detail, as well as the energy consumption distribution. The results provide a design reference for a reasonable camber that yields the least energy consumption for climbing corresponding to different P0; thus, energy-saving design can be achieved.

  17. Theory of the unitary representations of compact groups

    International Nuclear Information System (INIS)

    Burzynski, A.; Burzynska, M.

    1979-01-01

    The introduction contains some basic notions used in group theory: Lie groups, Lie algebras and unitary representations. We then deal with compact groups, for which we treat the reduction of unitary representations, Wigner's projection operators, Clebsch-Gordan coefficients and the Wigner-Eckart theorem. We present (in a new approach) the representation-reduction formalism using superoperators in Hilbert-Schmidt space. (author)
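    The reduction problem mentioned above can be made concrete for a finite group (the simplest compact case), where character orthogonality gives irrep multiplicities directly; S3 and its permutation representation are chosen purely for illustration.

```python
import numpy as np
from itertools import permutations

# Multiplicity of irrep i in a representation R, by character orthogonality:
# m_i = (1/|G|) * sum_g chi_R(g) * conj(chi_i(g)). Shown for S3.
group = list(permutations(range(3)))          # S3, |G| = 6

def perm_matrix(p):
    m = np.zeros((3, 3))
    for col, row in enumerate(p):
        m[row, col] = 1.0
    return m

def parity(p):
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

chi_perm = np.array([np.trace(perm_matrix(p)) for p in group])  # 3-dim perm rep
chi_triv = np.ones(6)                                           # trivial irrep
chi_sign = np.array([parity(p) for p in group], dtype=float)    # sign irrep
chi_std = chi_perm - chi_triv                                   # 2-dim standard irrep

mults = {name: int(round(np.dot(chi_perm, ch) / len(group)))
         for name, ch in [("trivial", chi_triv), ("sign", chi_sign),
                          ("standard", chi_std)]}
print(mults)   # permutation rep = trivial + standard
```

    For a compact Lie group the finite sum over the group becomes an integral against the Haar measure, but the orthogonality argument is the same.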

  18. Analysis of 2H(d vector, p)3H reaction at 30-90 keV by four-body Faddeev-Yakubovsky equation

    International Nuclear Information System (INIS)

    Uzu, Eizo; Oryu, Shinsho; Tanifuji, Makoto.

    1993-01-01

    Low-energy 2H(d vector, p)3H reactions are investigated by the four-body Faddeev-Yakubovsky equations. Cross sections and tensor analyzing powers are calculated at energies of 30-90 keV. The PEST-1 potentials are used for the nucleon-nucleon interactions. The [2+2] and [3+1] subamplitudes are treated by Hilbert-Schmidt expansions. The numerical results give a qualitative explanation of the experimental data. (author)

  19. Tumour control probability (TCP) for non-uniform activity distribution in radionuclide therapy

    International Nuclear Information System (INIS)

    Uusijaervi, Helena; Bernhardt, Peter; Forssell-Aronsson, Eva

    2008-01-01

    Non-uniform radionuclide distribution in tumours will lead to a non-uniform absorbed dose. The aim of this study was to investigate how tumour control probability (TCP) depends on the radionuclide distribution in the tumour, both macroscopically and at the subcellular level. The absorbed dose in the cell nuclei of tumours was calculated for 90Y, 177Lu, 103mRh and 211At. The radionuclides were uniformly distributed within the subcellular compartment and they were uniformly, normally or log-normally distributed among the cells in the tumour. When all cells contain the same amount of activity, the cumulated activities required for TCP = 0.99 (Ã(TCP=0.99)) were 1.5-2 and 2-3 times higher when the activity was distributed on the cell membrane compared to in the cell nucleus for 103mRh and 211At, respectively. TCP for 90Y was not affected by different radionuclide distributions, whereas for 177Lu it was slightly affected when the radionuclide was in the nucleus. TCP for 103mRh and 211At was affected by different radionuclide distributions to a great extent when the radionuclides were in the cell nucleus, and to a lesser extent when the radionuclides were distributed on the cell membrane or in the cytoplasm. When the activity was distributed in the nucleus, Ã(TCP=0.99) increased as the activity distribution became more heterogeneous for 103mRh and 211At, and the increase was large when the activity was normally distributed compared to log-normally distributed. When the activity was distributed on the cell membrane, Ã(TCP=0.99) was not affected for 103mRh and 211At as the activity distribution became more heterogeneous. Ã(TCP=0.99) for 90Y and 177Lu was not affected by different activity distributions, either macroscopic or subcellular.
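    The dependence of TCP on dose heterogeneity can be sketched with the standard Poisson TCP formula, TCP = exp(-expected number of surviving cells). The exponential cell-survival model and all numbers below are illustrative assumptions, not the study's dosimetry.

```python
import numpy as np

rng = np.random.default_rng(3)

# Poisson TCP: TCP = exp(-expected number of surviving clonogenic cells),
# with a simple exponential survival curve exp(-alpha * D). Values illustrative.
alpha = 0.35            # Gy^-1
n_cells = 10**6
mean_dose = 60.0        # Gy

def tcp(doses):
    return float(np.exp(-np.sum(np.exp(-alpha * doses))))

uniform = np.full(n_cells, mean_dose)

# Log-normal cell doses with the same mean but marked heterogeneity.
sigma = 0.5
mu = np.log(mean_dose) - 0.5 * sigma**2          # keeps the mean at mean_dose
heterogeneous = rng.lognormal(mu, sigma, n_cells)

print(f"TCP, uniform dose:       {tcp(uniform):.3f}")
print(f"TCP, heterogeneous dose: {tcp(heterogeneous):.3f}")
```

    Because survival is exponential in dose, the under-dosed tail of the log-normal distribution dominates the expected number of survivors, so heterogeneity at equal mean dose sharply lowers TCP — the qualitative effect the record reports.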

  20. Spectra of turbulently advected scalars that have small Schmidt number

    Science.gov (United States)

    Hill, Reginald J.

    2017-09-01

    Exact statistical equations are derived for turbulent advection of a passive scalar having diffusivity much larger than the kinematic viscosity, i.e., small Schmidt number. The equations contain all terms needed for precise direct numerical simulation (DNS) quantification. In the appropriate limit, the equations reduce to the classical theory, for which the scalar spectrum is proportional to the energy spectrum multiplied by k^(-4), which, in turn, results in the inertial-diffusive range power law k^(-17/3). The classical theory was derived for the case of isotropic velocity and scalar fields. The exact equations are simplified for less restrictive cases: (1) locally isotropic scalar fluctuations at dissipation scales with no restriction on the symmetry of the velocity field, (2) isotropic velocity field with averaging over all wave-vector directions with no restriction on the symmetry of the scalar, motivated by that average being used for DNS, and (3) isotropic velocity field with axisymmetric scalar fluctuations, motivated by the mean-scalar-gradient-source case. The equations are applied to recently published DNSs of passive scalars for the cases of a freely decaying scalar and a mean-scalar-gradient source. New terms in the exact equations are estimated for those cases and are found to be significant; those terms cause the deviations from the classical theory found by the DNS studies. A new formula for the mean-scalar-gradient case explains the variation of the scalar spectra for the DNS of the smallest Schmidt-number cases. Expansion in Legendre polynomials reveals the effect of axisymmetry. Inertial-diffusive-range formulas for both the zeroth- and second-order Legendre contributions are given. Exact statistical equations reveal what must be quantified using DNS to determine what causes deviations from asymptotic relationships.

  1. Global Marine Science and Carlsberg - The Golden Connections of Johannes Schmidt (1877-1933) (Med dansksproget resume)

    DEFF Research Database (Denmark)

    Poulsen, Bo

    …the International Council for the Exploration of the Sea (ICES), the Danish state and several private companies. Launching 26 oceangoing expeditions, Schmidt made landmark discoveries such as the breeding ground for the Atlantic eel in the Sargasso Sea. The scientific frontier was pushed literally kilometres into the deep sea and across…

  2. Adaptive Learning in Cartesian Product of Reproducing Kernel Hilbert Spaces

    OpenAIRE

    Yukawa, Masahiro

    2014-01-01

    We propose a novel adaptive learning algorithm based on iterative orthogonal projections in the Cartesian product of multiple reproducing kernel Hilbert spaces (RKHSs). The task is estimating/tracking nonlinear functions which are supposed to contain multiple components such as (i) linear and nonlinear components, (ii) high- and low-frequency components, etc. In this case, the use of multiple RKHSs permits a compact representation of multicomponent functions. The proposed algorithm is where t...

  3. Controlled G-Frames and Their G-Multipliers in Hilbert spaces

    OpenAIRE

    Rahimi, Asghar; Fereydooni, Abolhassan

    2012-01-01

    Multipliers have been recently introduced by P. Balazs as operators for Bessel sequences and frames in Hilbert spaces. These are operators that combine (frame-like) analysis, a multiplication with a fixed sequence (called the symbol), and synthesis. Weighted and controlled frames have been introduced to improve the numerical efficiency of iterative algorithms for inverting the frame operator. Also, g-frames are the most popular generalization of frames, which include almost all of the frame extens...

  4. Noise properties of Hilbert transform evaluation

    International Nuclear Information System (INIS)

    Pavliček, Pavel; Svak, Vojtěch

    2015-01-01

    The Hilbert transform is a standard method for the calculation of the envelope and phase of a modulated signal in optical measurement methods. Usually, the intensity of light is converted into an electric signal at a detector, so the actual spatially or temporally sampled signal is always affected by noise. Because the noise values of individual samples are independent, the noise can be considered white. If the envelope and phase are calculated from the noisy signal, they will also be affected by the noise. We calculate the variance and spectral density of both the envelope noise and the phase noise, and determine which parameters influence them. Finally, we determine the influence of the noise on the measurement uncertainty in white-light interferometry and fringe-pattern analysis. (paper)
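    Envelope and phase extraction via the analytic signal, and the effect of additive white detector noise on the recovered envelope, can be sketched as follows; the Gaussian envelope, carrier frequency and noise level are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(4)

# Gaussian-envelope carrier, loosely like a white-light interferogram.
n = 4096
t = np.arange(n)
envelope_true = np.exp(-0.5 * ((t - n / 2) / 400.0) ** 2)
signal = envelope_true * np.cos(2 * np.pi * 0.25 * t)

analytic = hilbert(signal)              # signal + i * (Hilbert transform of signal)
envelope = np.abs(analytic)
phase = np.unwrap(np.angle(analytic))   # the modulated phase, if needed

noisy = signal + rng.normal(0.0, 0.01, n)    # additive white detector noise
envelope_noisy = np.abs(hilbert(noisy))

err_clean = float(np.max(np.abs(envelope - envelope_true)))
err_noisy = float(np.max(np.abs(envelope_noisy - envelope_true)))
print(f"max envelope error - clean: {err_clean:.5f}, noisy: {err_noisy:.5f}")
```

    The noise-free envelope error is essentially numerical, while the noisy envelope inherits an error on the order of the detector noise, which is the effect the record quantifies analytically.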

  5. Random phenomena fundamentals of probability and statistics for engineers

    CERN Document Server

    Ogunnaike, Babatunde A

    2009-01-01

    Prelude; Approach Philosophy; Four Basic Principles; I Foundations: Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach; Random Phenomena, Variability, and Uncertainty; Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework; II Probability: Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence; Random Variables and Distributions; Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions; Multidimensional Random Variables; Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables; Random Variable Transformations; Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations; Application Case Studies I: Probability; Mendel and Heredity; World War II Warship Tactical Response Under Attack; III Distributions: Ide...

  6. Independence and totalness of subspaces in phase space methods

    Science.gov (United States)

    Vourdas, A.

    2018-04-01

    The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.

  7. Lie-algebra expansions, Chern-Simons theories and the Einstein-Hilbert Lagrangian

    International Nuclear Information System (INIS)

    Edelstein, Jose D.; Hassaine, Mokhtar; Troncoso, Ricardo; Zanelli, Jorge

    2006-01-01

    Starting from gravity as a Chern-Simons action for the AdS algebra in five dimensions, it is possible to modify the theory through an expansion of the Lie algebra that leads to a system consisting of the Einstein-Hilbert action plus non-minimally coupled matter. The modified system is gauge invariant under the Poincare group enlarged by an Abelian ideal. Although the resulting action naively looks like general relativity plus corrections due to matter sources, it is shown that the non-minimal couplings produce a radical departure from GR. Indeed, the dynamics is not continuously connected to the one obtained from Einstein-Hilbert action. In a matter-free configuration and in the torsionless sector, the field equations are too strong a restriction on the geometry as the metric must satisfy both the Einstein and pure Gauss-Bonnet equations. In particular, the five-dimensional Schwarzschild geometry fails to be a solution; however, configurations corresponding to a brane-world with positive cosmological constant on the worldsheet are admissible when one of the matter fields is switched on. These results can be extended to higher odd dimensions

  8. The projection operator in a Hilbert space and its directional derivative. Consequences for the theory of projected dynamical systems

    Directory of Open Access Journals (Sweden)

    George Isac

    2004-01-01

    In the first part of this paper we present a representation theorem for the directional derivative of the metric projection operator in an arbitrary Hilbert space. As a consequence of the representation theorem, we present in the second part the development of the theory of projected dynamical systems in infinite-dimensional Hilbert space. We show that this development is possible if we use the viable solutions of differential inclusions. We also use pseudomonotone operators.

  9. Various models for pion probability distributions from heavy-ion collisions

    International Nuclear Information System (INIS)

    Mekjian, A.Z.; Mekjian, A.Z.; Schlei, B.R.; Strottman, D.; Schlei, B.R.

    1998-01-01

    Various models for pion multiplicity distributions produced in relativistic heavy ion collisions are discussed. The models include a relativistic hydrodynamic model, a thermodynamic description, an emitting source pion laser model, and a description which generates a negative binomial description. The approach developed can be used to discuss other cases which will be mentioned. The pion probability distributions for these various cases are compared. Comparisons of the pion laser model with Bose-Einstein condensation in a laser trap and with the thermal model are made. The thermal model and hydrodynamic model are also used to illustrate why the number of pions never diverges and why the Bose-Einstein correction effects are relatively small. The pion emission strength η of a Poisson emitter and a critical density η_c are connected in a thermal model by η/η_c = e^(-m/T) < 1, and this fact reduces any Bose-Einstein correction effects in the number and number fluctuation of pions. Fluctuations can be much larger than Poisson in the pion laser model and for a negative binomial description. The clan representation of the negative binomial distribution due to Van Hove and Giovannini is discussed using the present description. Applications to CERN/NA44 and CERN/NA49 data are discussed in terms of the relativistic hydrodynamic model. © 1998 The American Physical Society.

  10. Positive-definite functions and unitary representations of locally compact groups in a Hilbert space

    International Nuclear Information System (INIS)

    Gali, I.M.; Okb el-Bab, A.S.; Hassan, H.M.

    1977-08-01

    It is proved that the necessary and sufficient condition for the existence of an integral representation of a group of unitary operators in a Hilbert space is that the associated function be positive-definite and continuous in some topology.

  11. Analytical models of probability distribution and excess noise factor of solid state photomultiplier signals with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2012-01-01

    Silicon Photomultipliers (SiPM), also called Solid State Photomultipliers (SSPM), are based on Geiger-mode avalanche breakdown that is limited by a strong negative feedback. An SSPM can detect and resolve single photons due to the high gain and ultra-low excess noise of avalanche multiplication in this mode. Crosstalk and afterpulsing processes associated with the high gain introduce specific excess noise and deteriorate the photon number resolution of the SSPM. The probabilistic features of these processes are widely studied because of their significance for SSPM design, characterization, optimization and application, but the process modeling is mostly based on Monte Carlo simulations and numerical methods. In this study, crosstalk is considered to be a branching Poisson process, and analytical models of the probability distribution and excess noise factor (ENF) of SSPM signals, based on the Borel distribution as an advance on the geometric distribution models, are presented and discussed. The models are found to be in good agreement with the experimental probability distributions for dark counts and few-photon spectra over a wide range of numbers of fired pixels, as well as with the observed super-linear behavior of the crosstalk ENF.
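The Borel-distribution crosstalk model lends itself to a compact numerical check. The sketch below (an illustration of the model class discussed in the abstract, not the author's code; the crosstalk parameter value is arbitrary) evaluates the Borel pmf for the number of pixels fired by one primary discharge and recovers the closed-form moments, from which the single-discharge ENF follows.

```python
import math

def borel_pmf(n, lam):
    # Borel distribution: probability that one primary Geiger discharge
    # fires n pixels in total when crosstalk branches as Poisson(lam).
    # Computed in log space to avoid overflow for large n.
    log_p = (n - 1) * math.log(lam * n) - lam * n - math.lgamma(n + 1)
    return math.exp(log_p)

def borel_moments(lam, n_max=400):
    # Truncated numerical moments; the closed forms are
    # mean = 1/(1-lam) and var = lam/(1-lam)**3.
    ns = range(1, n_max + 1)
    ps = [borel_pmf(n, lam) for n in ns]
    mean = sum(n * p for n, p in zip(ns, ps))
    var = sum(n * n * p for n, p in zip(ns, ps)) - mean ** 2
    return mean, var

lam = 0.2                      # illustrative crosstalk parameter
mean, var = borel_moments(lam)
enf = 1.0 + var / mean ** 2    # excess noise factor of the one-discharge signal
```

With λ = 0.2 the truncated sums reproduce the mean 1/(1−λ) = 1.25, and the resulting ENF equals 1/(1−λ), consistent with the Borel moment formulas.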

  12. International Roughness Index (IRI) measurement using Hilbert-Huang transform

    Science.gov (United States)

    Zhang, Wenjin; Wang, Ming L.

    2018-03-01

    International Roughness Index (IRI) is an important metric for measuring the condition of roadways. This index is usually used to justify maintenance priority and scheduling for roadways. Various inspection methods and algorithms are used to assess this index through the use of road profiles. This study proposes to calculate IRI values using the Hilbert-Huang Transform (HHT) algorithm. In particular, road profile data are provided by surface radar attached to a vehicle driving at highway speed. HHT is used in this study because of its superior properties for nonstationary and nonlinear data. Empirical mode decomposition (EMD) processes the raw data into a set of intrinsic mode functions (IMFs), each representing a dominant frequency band. These frequencies include noise from the body of the vehicle, the sensor location, the excitation induced by the natural frequency of the vehicle, etc. IRI calculation can be achieved by eliminating noise that is not associated with the road profile, including the vehicle inertia effect. The resulting IRI values compare favorably to the field IRI values: the filtered IMFs capture most characteristics of the road profile while eliminating noise from the vehicle and the vehicle inertia effect. Therefore, HHT is an effective method for road profile analysis and IRI measurement. Furthermore, the HHT method has the potential to eliminate the use of accelerometers attached to the vehicle as part of the displacement measurement used to offset the inertia effect.
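As a toy illustration of the Hilbert step of HHT (the EMD sifting stage is omitted, and the "road profile" is a synthetic sinusoid, not radar data), one can extract the instantaneous amplitude and spatial frequency of a single IMF with SciPy's analytic-signal routine:

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                     # samples per metre (hypothetical sampling rate)
x = np.arange(0, 20, 1 / fs)   # 20 m of synthetic "road"
imf = 0.01 * np.sin(2 * np.pi * 0.5 * x)   # one IMF: 0.5 cycles/m roughness

analytic = hilbert(imf)                        # analytic signal of the IMF
amplitude = np.abs(analytic)                   # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, cycles/m
```

In the full pipeline, EMD would first split the radar profile into IMFs, and IMFs whose instantaneous frequencies match vehicle-body or inertia effects would be discarded before computing the IRI.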

  13. Conformal symmetries of the Einstein-Hilbert action on horizons of stationary and axisymmetric black holes

    International Nuclear Information System (INIS)

    Mei Jianwei

    2012-01-01

    We suggest a way to study possible conformal symmetries on black hole horizons. We do this by carrying out a Kaluza-Klein-like reduction of the Einstein-Hilbert action along the ignorable coordinates of stationary and axisymmetric black holes. Rigid diffeomorphism invariance of the m-ignorable coordinates then becomes a global SL(m, R) gauge symmetry of the reduced action. Related to each non-vanishing angular velocity, there is a particular SL(2, R) subgroup, which can be extended to the Witt algebra on the black hole horizons. The classical Einstein-Hilbert action thus has k-copies of infinite-dimensional conformal symmetries on a given black hole horizon, with k being the number of non-vanishing angular velocities of the black hole. (paper)

  14. Employing the Hilbert-Huang Transform to analyze observed natural complex signals: Calm wind meandering cases

    Science.gov (United States)

    Martins, Luis Gustavo Nogueira; Stefanello, Michel Baptistella; Degrazia, Gervásio Annes; Acevedo, Otávio Costa; Puhales, Franciano Scremin; Demarco, Giuliano; Mortarini, Luca; Anfossi, Domenico; Roberti, Débora Regina; Costa, Felipe Denardin; Maldaner, Silvana

    2016-11-01

    In this study we analyze natural complex signals employing Hilbert-Huang spectral analysis. Specifically, low-wind meandering meteorological data are decomposed into turbulent and non-turbulent components. These non-turbulent movements, responsible for the absence of a preferential direction of the horizontal wind, provoke negative lobes in the meandering autocorrelation functions. The meandering characteristic time scales (meandering periods) are determined from the spectral peak provided by the Hilbert-Huang marginal spectrum. The magnitudes of the temperature and horizontal wind meandering periods obtained agree with the results found from the best fit of the heuristic meandering autocorrelation functions. Therefore, the new method represents a procedure for evaluating meandering periods that does not employ mathematical expressions to represent observed meandering autocorrelation functions.
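A commonly used heuristic form for the meandering autocorrelation is a damped cosine, whose oscillation frequency fixes the meandering period. The sketch below assumes that form for illustration; the parameter values are invented, not fitted to the observations in the paper.

```python
import numpy as np

def meandering_acf(tau, p, q):
    # Damped-cosine autocorrelation: the cosine produces the negative
    # lobes associated with meandering; p damps, q sets the oscillation.
    return np.exp(-p * tau) * np.cos(q * tau)

p, q = 0.002, 0.01               # 1/s; illustrative low-wind values
tau = np.linspace(0.0, 1500.0, 3001)
acf = meandering_acf(tau, p, q)
period = 2 * np.pi / q           # meandering period implied by q
```

The Hilbert-Huang route described in the abstract reads the same period off the marginal-spectrum peak instead of fitting this functional form.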

  15. States in the Hilbert space formulation and in the phase space formulation of quantum mechanics

    International Nuclear Information System (INIS)

    Tosiek, J.; Brzykcy, P.

    2013-01-01

    We consider the problem of testing whether a given matrix in the Hilbert space formulation of quantum mechanics or a function considered in the phase space formulation of quantum theory represents a quantum state. We propose several practical criteria for recognising states in these two versions of quantum physics. After minor modifications, they can be applied to check positivity of any operators acting in a Hilbert space or positivity of any functions from an algebra with a ∗-product of Weyl type. -- Highlights: ► Methods of testing whether a given matrix represents a quantum state. ► The Stratonovich–Weyl correspondence on an arbitrary symplectic manifold. ► Criteria for checking whether a function on a symplectic space is a Wigner function

  16. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

    The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has limited utility. This paper presents the derivation of the probability distribution of maintenance cost when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function; the discrete Fourier transform of the characteristic function then leads to the complete probability distribution of cost in a finite-time setting. The proposed approach is useful for a precise estimation of prediction limits and optimization of the maintenance cost.
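The exact finite-time distribution comes from the characteristic function and its discrete Fourier transform in the paper; a Monte Carlo histogram of the finite-horizon cost is a natural independent cross-check. In the sketch below, the inspection interval, thresholds, and all cost figures are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def cbm_cost(horizon=20.0, dt=1.0, shape=1.0, scale=0.5,
             rho=3.0, L=5.0, c_insp=0.1, c_prev=1.0, c_fail=5.0):
    # One Monte Carlo history of a CBM policy under gamma-process
    # degradation: increments over each interval are Gamma(shape*dt, scale).
    # At each inspection pay c_insp; replace correctively (c_fail) if the
    # failure level L was exceeded, preventively (c_prev) if above rho.
    t, x, cost = 0.0, 0.0, 0.0
    while t < horizon:
        x += rng.gamma(shape * dt, scale)
        t += dt
        cost += c_insp
        if x >= L:
            cost += c_fail; x = 0.0
        elif x >= rho:
            cost += c_prev; x = 0.0
    return cost

costs = np.array([cbm_cost() for _ in range(5000)])
# Empirical finite-horizon cost distribution: gives prediction intervals,
# not just the asymptotic mean cost rate.
p5, p95 = np.percentile(costs, [5, 95])
```

The spread between the percentiles is exactly the kind of information the asymptotic cost rate cannot provide.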

  17. Measuring sensitivity in pharmacoeconomic studies. Refining point sensitivity and range sensitivity by incorporating probability distributions.

    Science.gov (United States)

    Nuijten, M J

    1999-07-01

    The aim of the present study is to describe a refinement of a previously presented method, based on the concept of point sensitivity, to deal with uncertainty in economic studies. The original method was refined by the incorporation of probability distributions, which allows a more accurate assessment of the level of uncertainty in the model. In addition, a bootstrap method was used to create a probability distribution for a fixed input variable based on a limited number of data points. The original method was limited in that the sensitivity measurement was based on a uniform distribution of the variables and that the overall sensitivity measure was based on a subjectively chosen range, which excludes the impact of values outside the range on the overall sensitivity. The concepts of the refined method were illustrated using a Markov model of depression. The application of the refined method substantially changed the ranking of the most sensitive variables compared with the original method. The response rate became the most sensitive variable instead of the 'per diem' for hospitalisation. The refinement of the original method yields sensitivity outcomes which better reflect the real uncertainty in economic studies.
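The bootstrap step can be sketched in a few lines. The cost figures below are invented stand-ins for a small set of observed values of a fixed input variable (e.g., a per-diem cost); resampling them with replacement yields an empirical probability distribution for use in the sensitivity analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# A handful of observed per-diem hospitalisation costs (hypothetical data).
observed = np.array([310.0, 285.0, 330.0, 295.0, 340.0, 305.0])

def bootstrap_input(data, n_boot=10000):
    # Resample with replacement to approximate the sampling distribution
    # of the input variable's mean from a limited number of data points.
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    return data[idx].mean(axis=1)

boot_means = bootstrap_input(observed)
lo, hi = np.percentile(boot_means, [2.5, 97.5])  # uncertainty interval
```

Feeding `boot_means` into the model as the distribution of that input, instead of a uniform range, is the refinement the abstract describes.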

  18. Exploring non-signalling polytopes with negative probability

    International Nuclear Information System (INIS)

    Oas, G; Barros, J Acacio de; Carvalhaes, C

    2014-01-01

    Bipartite and tripartite EPR–Bell type systems are examined via joint quasi-probability distributions where probabilities are permitted to be negative. It is shown that such distributions exist only when the no-signalling condition is satisfied. A characteristic measure, the probability mass, is introduced and, via its minimization, limits the number of quasi-distributions describing a given marginal probability distribution. The minimized probability mass is shown to be an alternative way to characterize non-local systems. Non-signalling polytopes for two to eight settings in the bipartite scenario are examined and compared to prior work. Examining perfect cloning of non-local systems within the tripartite scenario suggests defining two categories of signalling. It is seen that many properties of non-local systems can be efficiently described by quasi-probability theory. (paper)
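The minimization of the probability mass can be posed as a linear program. The sketch below is a minimal two-setting bipartite illustration, not the authors' code: it searches over signed joint distributions on the 16 deterministic hidden states, reproduces a given no-signalling box, and minimizes the L1 mass. A mass of 1 means an ordinary (non-negative) joint exists; the PR box forces mass strictly above 1.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def min_probability_mass(P):
    # Minimize sum |q(lam)| over signed joints q on hidden states
    # lam = (a0, a1, b0, b1) reproducing the behaviour P[x][y][a][b].
    lams = list(product([0, 1], repeat=4))
    n = len(lams)
    A_eq, b_eq = [], []
    for x, y, a, b in product([0, 1], repeat=4):
        row = [1.0 if (lam[x] == a and lam[2 + y] == b) else 0.0 for lam in lams]
        A_eq.append(row + [0.0] * n)
        b_eq.append(P[x][y][a][b])
    # Auxiliary u with -u <= q <= u turns the L1 objective into an LP.
    A_ub, b_ub = [], []
    for i in range(n):
        r1 = [0.0] * (2 * n); r1[i] = 1.0; r1[n + i] = -1.0
        r2 = [0.0] * (2 * n); r2[i] = -1.0; r2[n + i] = -1.0
        A_ub += [r1, r2]; b_ub += [0.0, 0.0]
    c = [0.0] * n + [1.0] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n + [(0.0, None)] * n)
    return res.fun

# PR box: a XOR b = x AND y with uniform marginals; and a classical noise box.
pr = [[[[0.5 if (a ^ b) == (x & y) else 0.0
         for b in (0, 1)] for a in (0, 1)] for y in (0, 1)] for x in (0, 1)]
noise = [[[[0.25 for b in (0, 1)] for a in (0, 1)] for y in (0, 1)] for x in (0, 1)]
mass_pr = min_probability_mass(pr)
mass_noise = min_probability_mass(noise)
```

The noise box admits an ordinary joint (mass 1), while the PR box needs negative weights (mass above 1), mirroring the paper's use of the minimized mass to characterize non-local systems.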

  19. Arbitrary-order Hilbert Spectral Analysis and Intermittency in Solar Wind Density Fluctuations

    Science.gov (United States)

    Carbone, Francesco; Sorriso-Valvo, Luca; Alberti, Tommaso; Lepreti, Fabio; Chen, Christopher H. K.; Němeček, Zdenek; Šafránková, Jana

    2018-05-01

    The properties of inertial- and kinetic-range solar wind turbulence have been investigated with the arbitrary-order Hilbert spectral analysis method, applied to high-resolution density measurements. Due to the small sample size and to the presence of strong nonstationary behavior and large-scale structures, the classical analysis in terms of structure functions may prove to be unsuccessful in detecting the power-law behavior in the inertial range, and may underestimate the scaling exponents. However, the Hilbert spectral method provides an optimal estimation of the scaling exponents, which have been found to be close to those for velocity fluctuations in fully developed hydrodynamic turbulence. At smaller scales, below the proton gyroscale, the system loses its intermittent multiscaling properties and converges to a monofractal process. The resulting scaling exponents, obtained at small scales, are in good agreement with those of classical fractional Brownian motion, indicating a long-term memory in the process, and the absence of correlations around the spectral-break scale. These results provide important constraints on models of kinetic-range turbulence in the solar wind.
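For contrast with the Hilbert spectral route, the structure-function route can be sketched on a synthetic monofractal signal. Ordinary Brownian motion (Hurst exponent H = 1/2, a stand-in here for the fBm-like small-scale regime described above, not solar wind data) has S_q(τ) ~ τ^(qH), so the second-order exponent should come out near 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ordinary Brownian motion: a monofractal with Hurst exponent H = 1/2.
n = 2 ** 17
x = np.cumsum(rng.standard_normal(n))

taus = np.unique(np.logspace(0, 2, 20).astype(int))
S2 = np.array([np.mean((x[t:] - x[:-t]) ** 2) for t in taus])

# Second-order structure-function exponent: S2(tau) ~ tau**zeta2, zeta2 = 2H.
zeta2 = np.polyfit(np.log(taus), np.log(S2), 1)[0]
```

On short, nonstationary samples like the density data in the abstract, this estimator degrades, which is precisely the motivation for the Hilbert spectral alternative.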

  20. Uniform sparse bounds for discrete quadratic phase Hilbert transforms

    Science.gov (United States)

    Kesler, Robert; Arias, Darío Mena

    2017-09-01

    For each α ∈ T, consider the discrete quadratic phase Hilbert transform acting on finitely supported functions f : Z → C according to H^α f(n) := Σ_{m ≠ 0} e^{iαm²} f(n − m)/m. We prove that, uniformly in α ∈ T, there is a sparse bound for the associated bilinear form for every pair of finitely supported functions f, g : Z → C. The sparse bound implies several mapping properties, such as weighted inequalities in an intersection of Muckenhoupt and reverse Hölder classes.
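A direct numerical rendering of the operator makes the definition concrete (a naive evaluation for finitely supported inputs, purely illustrative):

```python
import numpy as np

def quadratic_hilbert(f, alpha, pad=50):
    # (H^alpha f)(n) = sum_{m != 0} exp(i*alpha*m^2) * f(n - m) / m
    # for a finitely supported f given as a dict n -> value.
    lo, hi = min(f), max(f)
    out = {}
    for n in range(lo - pad, hi + pad + 1):
        s = 0j
        for k in f:              # only m = n - k contributes
            m = n - k
            if m != 0:
                s += np.exp(1j * alpha * m * m) * f[k] / m
        out[n] = s
    return out

# For f = delta at 0, (H^alpha f)(n) = exp(i*alpha*n^2)/n for n != 0.
delta = {0: 1.0}
res = quadratic_hilbert(delta, alpha=0.3)
```

Applied to a delta function, the output is the quadratically modulated Hilbert kernel itself, exhibiting the odd symmetry of the 1/m factor.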

  1. Multipliers for continuous frames in Hilbert spaces

    International Nuclear Information System (INIS)

    Balazs, P; Bayer, D; Rahimi, A

    2012-01-01

    In this paper, we examine the general theory of continuous frame multipliers in Hilbert space. These operators are a generalization of the widely used notion of (discrete) frame multipliers. Well-known examples include anti-Wick operators, STFT multipliers or Calderón–Toeplitz operators. Due to the possible peculiarities of the underlying measure spaces, continuous frames do not behave quite as their discrete counterparts. Nonetheless, many results similar to the discrete case are proven for continuous frame multipliers as well, for instance compactness and Schatten-class properties. Furthermore, the concepts of controlled and weighted frames are transferred to the continuous setting. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’. (paper)

  2. Bulk entanglement gravity without a boundary: Towards finding Einstein's equation in Hilbert space

    Science.gov (United States)

    Cao, ChunJun; Carroll, Sean M.

    2018-04-01

    We consider the emergence from quantum entanglement of spacetime geometry in a bulk region. For certain classes of quantum states in an appropriately factorized Hilbert space, a spatial geometry can be defined by associating areas along codimension-one surfaces with the entanglement entropy between either side. We show how Radon transforms can be used to convert these data into a spatial metric. Under a particular set of assumptions, the time evolution of such a state traces out a four-dimensional spacetime geometry, and we argue using a modified version of Jacobson's "entanglement equilibrium" that the geometry should obey Einstein's equation in the weak-field limit. We also discuss how entanglement equilibrium is related to a generalization of the Ryu-Takayanagi formula in more general settings, and how quantum error correction can help specify the emergence map between the full quantum-gravity Hilbert space and the semiclassical limit of quantum fields propagating on a classical spacetime.

  3. Hilbert scheme of points on cyclic quotient singularities of type (p,1)

    OpenAIRE

    Gyenge, Ádám

    2016-01-01

    In this note we investigate the generating series of the Euler characteristics of Hilbert scheme of points on cyclic quotient singularities of type (p,1). We link the appearing combinatorics to p-fountains, a generalization of the notion of fountain of coins. We obtain a representation of the generating series as coefficient of a two variable generating series.

  4. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.

  5. Evaluating the suitability of wind speed probability distribution models: A case of study of east and southeast parts of Iran

    International Nuclear Information System (INIS)

    Alavi, Omid; Mohammadi, Kasra; Mostafaeipour, Ali

    2016-01-01

    Highlights: • Suitability of different wind speed probability functions is assessed. • Five stations distributed in the east and southeast of Iran are considered as case studies. • The Nakagami distribution is tested for the first time and compared with 7 other functions. • Because wind features differ between sites, the best function is not the same for all stations. - Abstract: Precise information on the wind speed probability distribution is truly significant for many wind energy applications. The objective of this study is to evaluate the suitability of different probability functions for estimating wind speed distributions at five stations distributed in the east and southeast of Iran. The Nakagami distribution function is utilized for the first time to estimate the distribution of wind speed. The performance of the Nakagami function is compared with seven typically used distribution functions. The results reveal that the most effective function is not the same for all stations. Wind speed characteristics and the quantity and quality of the recorded wind speed data can be considered influential parameters for the performance of the distribution functions. Also, the skewness of the recorded wind speed data may influence the accuracy of the Nakagami distribution. For the Chabahar and Khaf stations the Nakagami distribution shows the highest performance, while for the Lutak, Rafsanjan and Zabol stations the Gamma, Generalized Extreme Value and Inverse-Gaussian distributions offer the best fits, respectively. Based on the analysis, the Nakagami distribution can generally be considered an effective distribution, since it provides the best fit at 2 stations and ranks 3rd to 5th at the remaining stations; however, given the close performance of the Nakagami and Weibull distributions and the widely proven flexibility of the Weibull function, more assessment of the performance of the Nakagami distribution is required.
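The model-comparison step can be sketched with SciPy's built-in distribution fitting. The sample below is synthetic (drawn from a Weibull, standing in for station records), the candidate set is trimmed to three of the eight functions, and ranking is by maximized log-likelihood, one of several reasonable goodness-of-fit criteria.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic "wind speed" sample (m/s); station records would be used instead.
speeds = stats.weibull_min.rvs(1.8, scale=6.0, size=5000, random_state=rng)

candidates = {"weibull": stats.weibull_min,
              "nakagami": stats.nakagami,
              "gamma": stats.gamma}
fits, loglik = {}, {}
for name, dist in candidates.items():
    params = dist.fit(speeds, floc=0)            # location pinned at zero
    fits[name] = params
    loglik[name] = np.sum(dist.logpdf(speeds, *params))

best = max(loglik, key=loglik.get)               # highest log-likelihood wins
```

With real data, a site-by-site ranking over the full candidate set reproduces the kind of station-dependent results the abstract reports.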

  6. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
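The inflation effect is easy to reproduce by simulation. The sketch below uses a log-normal risk factor with moments fitted on the log scale; the sample size and nominal level are arbitrary choices, and the moment-fitting rule is one simple estimator, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def realized_failure_rate(n=20, eps=0.01, n_rep=20000):
    # Threshold set at the (1 - eps) quantile of a log-normal FITTED to n
    # observations; count how often a fresh draw from the TRUE distribution
    # still exceeds it.
    z = norm.ppf(1 - eps)
    failures = 0
    for _ in range(n_rep):
        logs = rng.normal(0.0, 1.0, n)           # log-data; true mu=0, sigma=1
        threshold = logs.mean() + z * logs.std(ddof=1)
        failures += rng.normal(0.0, 1.0) > threshold
    return failures / n_rep

rate = realized_failure_rate()
# Parameter uncertainty inflates the realized frequency above the nominal eps.
```

With n = 20 the realized frequency comes out well above the nominal 1%; because the exceedance event is pivotal for location-scale families, the paper can compute this inflated probability exactly rather than by simulation.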

  7. New method for extracting tumors in PET/CT images based on the probability distribution

    International Nuclear Information System (INIS)

    Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori

    2006-01-01

    In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)
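The adaptive-threshold idea can be sketched on a toy flattened "image" with two labelled body regions. All numbers are invented, and a robust median/MAD cut stands in for whatever threshold rule the authors derive from the pixel-value distribution of each region.

```python
import numpy as np

rng = np.random.default_rng(5)

def adaptive_tumor_mask(pet, region_labels, n_sigma=4.0):
    # Per-region adaptive threshold: within each body region, model normal
    # uptake by the region's own pixel-value distribution (median/MAD) and
    # flag pixels far above it as tumor candidates.
    mask = np.zeros(pet.shape, dtype=bool)
    for lab in np.unique(region_labels):
        vals = pet[region_labels == lab]
        med = np.median(vals)
        mad = np.median(np.abs(vals - med)) + 1e-12
        mask |= (region_labels == lab) & (pet > med + n_sigma * 1.4826 * mad)
    return mask

# Toy data: two "regions" with different normal uptake plus hot tumor pixels.
pet = np.concatenate([rng.normal(1.0, 0.1, 500), rng.normal(3.0, 0.3, 500)])
labels = np.concatenate([np.zeros(500, int), np.ones(500, int)])
pet[10] = 5.0     # tumor in the low-uptake region
pet[600] = 9.0    # tumor in the high-uptake region
mask = adaptive_tumor_mask(pet, labels)
```

Because each region gets its own threshold, the hot pixel in the low-uptake region is found even though its value is below the high-uptake region's normal level, which is the point of adapting the threshold per region.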

  8. Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.

    Science.gov (United States)

    Stępień, Grzegorz

    2018-03-17

    The following article presents a new isometric transformation algorithm based on a transformation in a newly normed Hilbert type space. The presented method is based on so-called virtual translations, already known in advance, of two relative oblique orthogonal coordinate systems (interior and exterior orientation of sensors) to a common point, known in both systems. Each of the systems is translated along its axis (the systems have common origins) and at the same time the angular relative orientation of both coordinate systems is constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert type space. As such, the displacement of the two relative oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for big rotation angles. The method was verified in laboratory conditions for the test data set and measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10−6 of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and enables, by reducing to zero the translation of two relative oblique orthogonal coordinate systems, direct calculation of the exterior orientation of the sensors.

  9. Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space

    Directory of Open Access Journals (Sweden)

    Grzegorz Stępień

    2018-03-01

    Full Text Available The following article presents a new isometric transformation algorithm based on a transformation in a newly normed Hilbert type space. The presented method is based on so-called virtual translations, already known in advance, of two relative oblique orthogonal coordinate systems (interior and exterior orientation of sensors) to a common point, known in both systems. Each of the systems is translated along its axis (the systems have common origins) and at the same time the angular relative orientation of both coordinate systems is constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert type space. As such, the displacement of the two relative oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for big rotation angles. The method was verified in laboratory conditions for the test data set and measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10−6 of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and enables, by reducing to zero the translation of two relative oblique orthogonal coordinate systems, direct calculation of the exterior orientation of the sensors.
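MCIT itself is not reproduced here, but the endpoint it reaches (a direct rotation estimate once translations between the two frames are reduced to zero) can be illustrated with the standard centroid-plus-SVD (Kabsch) solution, which likewise handles big rotation angles:

```python
import numpy as np

rng = np.random.default_rng(11)

def rotation_between(A, B):
    # Least-squares rotation R with B ~ R @ A for 3xN point sets.
    # Centroid subtraction removes the translation first, echoing the
    # "reduce the translation to zero" step of the article's method.
    A0 = A - A.mean(axis=1, keepdims=True)
    B0 = B - B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(B0 @ A0.T)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    return U @ D @ Vt

# Recover a known rotation, including a big rotation angle.
angle = np.deg2rad(140.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
pts = rng.random((3, 12))
R_est = rotation_between(pts, Rz @ pts + np.array([[5.0], [-2.0], [0.7]]))
```

With noiseless points the rotation is recovered to machine precision despite the 140° angle and the added translation.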

  10. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    Directory of Open Access Journals (Sweden)

    S. K. Morley

    2007-11-01

    Full Text Available We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model which scale the contributions to AL from the directly-driven DP2 electrojet and loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different to the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level agreement are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.

  11. Hilbert spaces contractively included in the Hardy space of the bidisk

    NARCIS (Netherlands)

    Alpay, D.; Bolotnikov, V.; Dijksma, A.; Sadosky, C.

    We study the reproducing kernel Hilbert spaces h(D², S) with kernels of the form [I − S(z_1, z_2)S(w_1, w_2)*] / [(1 − z_1 w_1*)(1 − z_2 w_2*)], where S(z_1, z_2) is a Schur function of two variables z_1, z_2 ∈ D. They are analogs of the spaces h(D, S) with reproducing kernel [I − S(z)S(w)*] / (1 − z w*).

  12. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham

    2017-04-07

    This letter studies the meta distribution of the coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor on the uplink performance are analyzed. The results show that decreasing p and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.

  13. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum-likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration process, the conditional probability distribution plays a very important role in achieving high image quality. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)

  14. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Viana, R.S.; Yoriyaz, H.; Santos, A., E-mail: rodrigossviana@gmail.com, E-mail: hyoriyaz@ipen.br, E-mail: asantos@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum-likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration process, the conditional probability distribution plays a very important role in achieving high image quality. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)
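The E-M (MLEM) update at the heart of such reconstructions is compact. In the sketch below the conditional probability matrix is a random toy stand-in for the MCNP5-computed one, and the data are noiseless projections of a simple phantom.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    # Maximum-likelihood E-M image reconstruction:
    #   x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1)
    # A[i, j] is the conditional probability that an emission in voxel j
    # is recorded in detector bin i.
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # A.T @ 1, the sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(2)
A = rng.random((40, 10))                 # toy detection-probability matrix
x_true = np.array([0., 0., 5., 0., 0., 2., 0., 0., 0., 1.])
y = A @ x_true                           # noiseless data for the sketch
x_hat = mlem(A, y)
```

The multiplicative form keeps the estimate non-negative at every iteration, one of the reasons E-M is the workhorse of emission tomography.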

  15. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitate calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
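
    The article's MATLAB code is not reproduced here, but the idea of a maximum-Pc benchmark can be illustrated for the simplest case it covers: a two-interval forced-choice (2AFC) task with equal-variance Gaussian observations, where the likelihood ratio is monotone in the observation, so the optimal observer simply picks the interval with the larger value and max Pc has the closed form Phi(d'/sqrt(2)). The Monte Carlo check below is a hedged sketch of that special case, not the paper's general formula.

```python
import math
import random

def pc_2afc_mc(d_prime, n=200_000, seed=1):
    """Monte Carlo estimate of the maximum Pc in 2AFC with equal-variance
    Gaussian noise: the ML observer picks the larger observation."""
    rng = random.Random(seed)
    hits = sum(rng.gauss(d_prime, 1.0) > rng.gauss(0.0, 1.0) for _ in range(n))
    return hits / n

def pc_2afc_exact(d_prime):
    """Closed form for the same case: Pc = Phi(d'/sqrt(2)) = (1 + erf(d'/2))/2."""
    return 0.5 * (1.0 + math.erf(d_prime / 2.0))
```

    For non-Gaussian observation distributions the decision statistic must be the likelihood ratio itself, which is exactly the situation the paper's general formula addresses.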

  16. The probability distribution of extreme precipitation

    Science.gov (United States)

    Korolev, V. Yu.; Gorshenin, A. K.

    2017-12-01

    On the basis of the negative binomial distribution of the duration (in days) of wet periods, an asymptotic model is proposed for the distribution of the maximum daily rainfall volume within a wet period; the model has the form of a mixture of Frechet distributions and coincides with the distribution of a positive power of a random variable having the Fisher-Snedecor distribution. The proof of the corresponding result is based on limit theorems for extreme order statistics in samples of random size with a mixed Poisson distribution. The adequacy of the proposed models and of the methods for their statistical analysis is demonstrated by the example of estimating the extreme-value distribution parameters from real data.
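
    The Frechet limit law invoked above can be demonstrated numerically in a simplified setting: for heavy-tailed (Pareto) daily amounts, the rescaled maximum of n observations converges in distribution to the standard Frechet law. This sketch uses a fixed sample size n rather than the paper's mixed-Poisson random sample sizes, so it is only a toy stand-in for the actual model.

```python
import math
import random

def frechet_cdf(x, alpha):
    """Standard Frechet CDF: F(x) = exp(-x**(-alpha)) for x > 0."""
    return math.exp(-x ** -alpha) if x > 0 else 0.0

def max_pareto_scaled(n, alpha, rng):
    """Max of n iid Pareto(alpha) samples, rescaled by n**(1/alpha).

    Classical extreme-value theory: this converges in distribution to the
    standard Frechet law as n grows.
    """
    m = max(rng.random() ** (-1.0 / alpha) for _ in range(n))  # inverse-CDF Pareto
    return m / n ** (1.0 / alpha)

rng = random.Random(7)
alpha, n, trials = 2.0, 200, 5_000
x0 = 1.5
empirical = sum(max_pareto_scaled(n, alpha, rng) <= x0 for _ in range(trials)) / trials
# empirical should be close to frechet_cdf(x0, alpha)
```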

  17. Regular Riemann-Hilbert transforms, Baecklund transformations and hidden symmetry algebra for some linearization systems

    International Nuclear Information System (INIS)

    Chau Ling-Lie; Ge Mo-Lin; Teh, Rosy.

    1984-09-01

    The Baecklund Transformations and the hidden symmetry algebra for the Self-Dual Yang-Mills Equations, the Landau-Lifshitz equations and the Extended Super Yang-Mills fields (N>2) are discussed on the basis of the Regular Riemann-Hilbert Transform and the linearization equations. (author)

  18. Snow-melt flood frequency analysis by means of copula based 2D probability distributions for the Narew River in Poland

    Directory of Open Access Journals (Sweden)

    Bogdan Ozga-Zielinski

    2016-06-01

    New hydrological insights for the region: The results indicated that the 2D normal probability distribution model gives a better probabilistic description of snowmelt floods characterized by the 2-dimensional random variable (Qmax,f, Vf) compared to the elliptical Gaussian copula and the Archimedean 1-parameter Gumbel–Hougaard copula models, in particular from the viewpoint of probability of exceedance as well as complexity and time of computation. Nevertheless, the copula approach offers a new perspective for estimating the 2D probability distribution of multidimensional random variables. The results also showed that the 2D model for snowmelt floods built using the Gumbel–Hougaard copula is much better than the model built using the Gaussian copula.

  19. Uncertainty of Hydrological Drought Characteristics with Copula Functions and Probability Distributions: A Case Study of Weihe River, China

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2017-05-01

    This study investigates the sensitivity and uncertainty of hydrological drought frequency and severity in the Weihe Basin, China, during 1960–2012, using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used for testing the goodness-of-fit of the univariate models, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at the three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, with an increase of the return period, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model hydrological drought frequency and severity. The results of this study can not only help drought investigations select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.
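
    The joint and co-occurrence return periods mentioned above follow directly from the fitted copula. As a hedged sketch (the Gumbel-Hougaard copula is one of the Archimedean families commonly used in such studies; the parameter values below are illustrative, not the paper's fitted ones), the "OR" return period uses 1 - C(u, v) and the "AND" return period uses the joint exceedance probability 1 - u - v + C(u, v):

```python
import math

def gumbel_hougaard(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 (theta = 1 is independence)."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def return_periods(u, v, theta, mu=1.0):
    """'OR' and 'AND' return periods for marginal non-exceedance probs u, v.

    mu: mean interarrival time of drought events (an assumption here).
    """
    c = gumbel_hougaard(u, v, theta)
    t_or = mu / (1.0 - c)               # either variable exceeds its threshold
    t_and = mu / (1.0 - u - v + c)      # both exceed (co-occurrence)
    return t_or, t_and

t_or, t_and = return_periods(0.99, 0.99, theta=2.0)
```

    As expected, the co-occurrence return period is always at least as long as the joint ("OR") return period for the same thresholds.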

  20. The solution of the sixth Hilbert problem: the ultimate Galilean revolution

    Science.gov (United States)

    D'Ariano, Giacomo Mauro

    2018-04-01

    I argue for a full mathematization of the physical theory, including its axioms, which must contain no physical primitives. In provocative words: `physics from no physics'. Although this may seem an oxymoron, it is the royal road to keep complete logical coherence, hence falsifiability of the theory. For such a purely mathematical theory the physical connotation must pertain only to the interpretation of the mathematics, ranging from the axioms to the final theorems. On the contrary, the postulates of the two current major physical theories either have no physical interpretation (as for von Neumann's axioms for quantum theory) or contain physical primitives such as `clock', `rigid rod', `force', `inertial mass' (as for special relativity and mechanics). A purely mathematical theory as proposed here, though with a limited (but relentlessly growing) domain of applicability, will have the eternal validity of mathematical truth. It will be a theory on which the natural sciences can firmly rely. Such a theory is what I consider to be the solution of the sixth Hilbert problem. I argue that a prototype example of such a mathematical theory is provided by the novel algorithmic paradigm for physics, as in the recent information-theoretical derivation of quantum theory and free quantum field theory. This article is part of the theme issue `Hilbert's sixth problem'.

  1. Probability distribution of dose rates in the body tissue as a function of the rhytm of Sr90 administration and the age of animals

    International Nuclear Information System (INIS)

    Rasin, I.M.; Sarapul'tsev, I.A.

    1975-01-01

    The probability distribution of tissue radiation dose rates in the skeleton was studied in experiments on swine and dogs. When Sr-90 was administered to the organism from the day of birth up to 90 days of age, the dose-rate probability distribution is characterized by one or, for adult animals, by two independent aggregates. Each of these aggregates corresponds to the normal distribution law

  2. Geometry of quantum dynamics in infinite-dimensional Hilbert space

    Science.gov (United States)

    Grabowski, Janusz; Kuś, Marek; Marmo, Giuseppe; Shulman, Tatiana

    2018-04-01

    We develop a geometric approach to quantum mechanics based on the concept of the Tulczyjew triple. Our approach is genuinely infinite-dimensional, i.e. we do not restrict considerations to finite-dimensional Hilbert spaces, contrary to many other works on the geometry of quantum mechanics, and include a Lagrangian formalism in which self-adjoint (Schrödinger) operators are obtained as Lagrangian submanifolds associated with the Lagrangian. As a byproduct we also obtain results concerning coadjoint orbits of the unitary group in infinite dimensions, embedding of pure states in the unitary group, and self-adjoint extensions of symmetric relations.

  3. Assessing the Adequacy of Probability Distributions for Estimating the Extreme Events of Air Temperature in Dabaa Region

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2015-01-01

    Assessing the adequacy of probability distributions for estimating the extreme events of air temperature in the Dabaa region is one of the prerequisites for any design purpose at the Dabaa site, which can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied for estimating the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests involving Kolmogorov-Smirnov and Anderson-Darling are used for checking the adequacy of fitting the method/distribution for the estimation of maximum/minimum temperature. Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated, as performance indicators, to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicated that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable and accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators showed the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution with the Lieblein technique is the best for October
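
    The Method of Moments step used above has a simple closed form for the Gumbel (maximum) distribution: the scale is beta = s*sqrt(6)/pi and the location is mu = mean - gamma*beta, where gamma is the Euler-Mascheroni constant; a design value for return period T then follows from inverting the Gumbel CDF. The sketch below illustrates this on synthetic data (the parameter values are hypothetical, not the Dabaa measurements):

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def gumbel_mom(sample):
    """Method-of-moments estimators (location mu, scale beta) for the
    Gumbel (maximum) distribution."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def gumbel_return_level(mu, beta, T):
    """Design value exceeded on average once every T periods."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Synthetic annual maxima from a known Gumbel law (mu=30, beta=2),
# generated by inverse-transform sampling
rng = random.Random(11)
sample = [30.0 - 2.0 * math.log(-math.log(rng.random())) for _ in range(5000)]
mu_hat, beta_hat = gumbel_mom(sample)
x100 = gumbel_return_level(mu_hat, beta_hat, 100)  # 100-year design value
```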

  4. Nonrelativistic multichannel quantum scattering theory in a two Hilbert space formulation

    International Nuclear Information System (INIS)

    Chandler, C.

    1977-08-01

    A two-Hilbert-space form of an abstract scattering theory specifically applicable to multichannel quantum scattering problems is outlined. General physical foundations of the theory are reviewed. Further topics discussed include the invariance principle, asymptotic completeness of the wave operators, representations of the scattering operator in terms of transition operators and fundamental equations that these transition operators satisfy. Outstanding problems, including the difficulties of including Coulomb interactions in the theory, are pointed out. (D.P.)

  5. Approximately dual frames in Hilbert spaces and applications to Gabor frames

    OpenAIRE

    Christensen, Ole; Laugesen, Richard S.

    2011-01-01

    Approximately dual frames are studied in the Hilbert space setting. Approximate duals are easier to construct than classical dual frames, and can be tailored to yield almost perfect reconstruction. Bounds on the deviation from perfect reconstruction are obtained for approximately dual frames constructed via perturbation theory. An alternative bound is derived for the rich class of Gabor frames, by using the Walnut representation of the frame operator to estimate the deviation from equality in...

  6. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.

  7. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400, La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)

    2009-09-15

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.
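

    The Monte Carlo approach described above can be sketched as follows: propagate random model parameters through a power-law pit-growth model d = k*(t - t0)**nu and examine the resulting depth distribution at a given pipeline age. The power-law form matches the kind of model the study uses, but the parameter distributions below are illustrative assumptions, not the authors' fitted soil-dependent values.

```python
import random

def simulate_pit_depths(t_years, n=10_000, seed=3):
    """Monte Carlo sample of pit depths d = k * (t - t0)**nu at age t_years.

    k, nu, t0 are drawn from hypothetical distributions (for illustration
    only; the paper fits these to observed soil data).
    """
    rng = random.Random(seed)
    depths = []
    for _ in range(n):
        k = max(0.01, rng.gauss(0.15, 0.05))          # mm / yr**nu (hypothetical)
        nu = min(1.0, max(0.1, rng.gauss(0.55, 0.1)))  # growth exponent
        t0 = rng.uniform(1.0, 3.0)                     # pit initiation time, yr
        depths.append(k * max(0.0, t_years - t0) ** nu)
    return depths

depths = simulate_pit_depths(25.0)
mean_depth = sum(depths) / len(depths)
p99 = sorted(depths)[int(0.99 * len(depths))]  # deep tail, relevant to reliability
```

    An extreme value distribution (Weibull, Frechet or Gumbel, as in the study) would then be fitted to such simulated depths to characterize the tail.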

  8. Perturbation for Frames for a Subspace of a Hilbert Space

    DEFF Research Database (Denmark)

    Christensen, Ole; deFlicht, C.; Lennard, C.

    1997-01-01

    We extend a classical result stating that a sufficiently small perturbation $\{ g_i \}$ of a Riesz sequence $\{ f_i \}$ in a Hilbert space $H$ is again a Riesz sequence. It turns out that the analogous result for a frame does not hold unless the frame is complete. However, we are able to prove a very similar result for frames in the case where the gap between the subspaces $\overline{span} \{ f_i \}$ and $\overline{span} \{ g_i \}$ is small enough. We give a geometric interpretation of the result.

  9. Riemann-Hilbert approach to the time-dependent generalized sine kernel

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, K.K.

    2010-12-15

    We derive the leading asymptotic behavior and build a new series representation for the Fredholm determinant of integrable integral operators appearing in the representation of the time and distance dependent correlation functions of integrable models described by a six-vertex R-matrix. This series representation opens a systematic way for the computation of the long-time, long-distance asymptotic expansion for the correlation functions of the aforementioned integrable models away from their free fermion point. Our method builds on a Riemann-Hilbert based analysis. (orig.)

  10. A probability space for quantum models

    Science.gov (United States)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
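
    The maximum entropy assignment mentioned above has a well-known concrete form: with a normalization constraint and one fixed average, the entropy maximizer is the Gibbs distribution p_i proportional to exp(-lambda*E_i), with the multiplier lambda chosen to match the average. A minimal sketch (a generic textbook construction, not the paper's specific propositional setup):

```python
import math

def maxent_probs(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over discrete outcomes with a fixed mean.

    Subject to sum(p) = 1 and sum(p*E) = target_mean, the entropy maximizer
    is p_i = exp(-lam*E_i)/Z; lam is found by bisection (the mean is a
    decreasing function of lam).
    """
    def mean_for(lam):
        w = [math.exp(-lam * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid  # mean still too high: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# Three outcomes with "energies" 0, 1, 2 and prescribed mean 0.5
p = maxent_probs([0.0, 1.0, 2.0], target_mean=0.5)
```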

  11. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained by replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process, we obtain a PH distribution, which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed-form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...
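
    The "intensity parameter replaced by a matrix" idea can be made concrete: a PH distribution with initial vector alpha and sub-intensity matrix T has CDF F(x) = 1 - alpha*exp(T*x)*1, where 1 is a column of ones. The sketch below evaluates this via a naive Taylor-series matrix exponential (adequate for small matrices and moderate T*x; this is an illustration, not the book's recommended numerics) and checks it against the Erlang-2 distribution, a standard PH example.

```python
import math

def mat_exp(T, x, terms=60):
    """exp(T*x) for a small square matrix via a truncated Taylor series."""
    n = len(T)
    A = [[T[i][j] * x for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def ph_cdf(alpha, T, x):
    """Phase-type CDF: F(x) = 1 - alpha * exp(T x) * 1."""
    E = mat_exp(T, x)
    surv = sum(alpha[i] * sum(E[i][j] for j in range(len(T))) for i in range(len(T)))
    return 1.0 - surv

# Erlang-2 with rate 1.5, written as a PH distribution
lam = 1.5
alpha = [1.0, 0.0]
T = [[-lam, lam], [0.0, -lam]]
```

    For this choice the closed form is F(x) = 1 - exp(-lam*x)*(1 + lam*x), which the matrix formula reproduces.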

  12. Quantum computation via local control theory: Direct sum vs. direct product Hilbert spaces

    International Nuclear Information System (INIS)

    Sklarz, Shlomo E.; Tannor, David J.

    2006-01-01

    The central objective in any quantum computation is the creation of a desired unitary transformation; the mapping that this unitary transformation produces between the input and output states is identified with the computation. In [S.E. Sklarz, D.J. Tannor, arXiv:quant-ph/0404081 (submitted to PRA) (2004)] it was shown that local control theory can be used to calculate fields that will produce such a desired unitary transformation. In contrast with previous strategies for quantum computing based on optimal control theory, the local control scheme maintains the system within the computational subspace at intermediate times, thereby avoiding unwanted decay processes. In [S.E. Sklarz et al.], the structure of the Hilbert space had a direct sum structure with respect to the computational register and the mediating states. In this paper, we extend the formalism to the important case of a direct product Hilbert space. The final equations for the control algorithm for the two cases are remarkably similar in structure, despite the fact that the derivations are completely different and that in one case the dynamics is in a Hilbert space and in the other case the dynamics is in a Liouville space. As shown in [S.E. Sklarz et al.], the direct sum implementation leads to a computational mechanism based on virtual transitions, and can be viewed as an extension of the principles of Stimulated Raman Adiabatic Passage from state manipulation to evolution operator manipulation. The direct product implementation developed here leads to the intriguing concept of virtual entanglement - computation that exploits second-order transitions that pass through entangled states but that leaves the subsystems nearly separable at all intermediate times. Finally, we speculate on a connection between the algorithm developed here and the concept of decoherence free subspaces

  13. A more accurate half-discrete Hardy-Hilbert-type inequality with the logarithmic function.

    Science.gov (United States)

    Wang, Aizhen; Yang, Bicheng

    2017-01-01

    By means of weight functions, techniques of real analysis and Hermite-Hadamard's inequality, a more accurate half-discrete Hardy-Hilbert-type inequality related to a logarithmic kernel, with a best possible constant factor, is given. Moreover, the equivalent forms, the operator expressions, the reverses and some particular cases are also considered.
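
    For orientation, the classical discrete Hardy-Hilbert inequality (the prototype of the family studied above, not the paper's sharper half-discrete logarithmic-kernel variant) states that the double sum of a_m*b_n/(m+n) is bounded by pi times the product of the l2 norms, with pi the best possible constant. A quick numerical check:

```python
import math
import random

def hilbert_ratio(a, b):
    """Ratio of sum_{m,n} a_m b_n / (m+n) to the product of l2 norms.

    By the classical Hardy-Hilbert inequality this ratio is < pi for any
    nonnegative square-summable sequences.
    """
    s = sum(ai * bj / (i + j + 2)  # m = i+1, n = j+1, so m+n = i+j+2
            for i, ai in enumerate(a) for j, bj in enumerate(b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return s / (norm_a * norm_b)

rng = random.Random(0)
a = [rng.random() for _ in range(200)]
b = [rng.random() for _ in range(200)]
r = hilbert_ratio(a, b)  # strictly below pi
```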

  14. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be

  15. Broken Rotor Bar Detection in Induction Motors Using Starting-Current Analysis Based on the Hilbert Transform

    Directory of Open Access Journals (Sweden)

    Isti Qomah

    2017-01-01

    Broken rotor bars are a type of induction motor fault that can cause serious problems, accounting for some 5% - 10% of all induction motor fault cases. Early diagnosis of faults in the rotor of an induction motor is therefore needed, so that repairs can be made quickly and responsively before a larger failure occurs. This final project discusses a technique for detecting broken rotor bars in induction motors using starting-current analysis. The system first applies a decomposition wavelet transform and then a Hilbert-transform-based analysis as the signal processing stage, so that it can detect whether a motor is healthy or faulty. The system was tested under several conditions, namely no-load and loaded operation, with faults ranging from 1BRB to 3BRB. The test results prove that the decomposition wavelet transform and the Hilbert transform are able to detect the difference between induction motors in normal condition and those with broken rotor bars.

  16. Professional internship in landscape architecture at the studio Rainer Schmidt Landscape Architects

    OpenAIRE

    Côdea, Rita Guadalupe Martins

    2015-01-01

    This report describes the work carried out during the academic internship in a professional environment, the final stage of the Master's degree in Landscape Architecture, undertaken at the studio Rainer Schmidt Landscape Architects. It is also intended as a reflection on the métier and as a bridge between academic knowledge and its practical application in a professional setting. In essence, it reports the experience gained while following the ...

  17. p-adic probability interpretation of Bell's inequality

    International Nuclear Information System (INIS)

    Khrennikov, A.

    1995-01-01

    We study the violation of Bell's inequality using a p-adic generalization of the theory of probability. p-adic probability is introduced as a limit of relative frequencies, but this limit exists with respect to a p-adic metric. In particular, negative probability distributions are well defined on the basis of the frequency definition. This new type of stochastics can be used to describe the hidden-variable distributions of some quantum models. If the hidden variables have a p-adic probability distribution, Bell's inequality is not valid, and it is not necessary to discuss the experimental violations of this inequality. ((orig.))

  18. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    Science.gov (United States)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This invention can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound-signal enhancement and filtering. Additionally, the acoustic signals from machinery are essentially the way the machines talk to us: the acoustic signals from machines, whether sound through air or vibration on the machines, can tell us the operating conditions of the machines. Thus, we can use the acoustic signal to diagnose machine problems.
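
    The Hilbert-transform step described above (not the full EMD sifting, which is considerably more involved) can be sketched directly: form the analytic signal by zeroing the negative-frequency half of the spectrum and doubling the positive half, then read off the envelope as its magnitude and the instantaneous frequency from the phase increment. The naive O(N^2) DFT below is for illustration on short signals only.

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform
    (naive O(N^2) DFT, fine for short illustrative signals)."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    # zero negative frequencies, double positive ones (keep DC and Nyquist)
    for k in range(1, N // 2):
        X[k] *= 2.0
    for k in range(N // 2 + 1, N):
        X[k] = 0.0
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A pure tone at 5 cycles per frame: envelope ~1, instantaneous frequency ~5
N, f = 64, 5
x = [math.cos(2 * math.pi * f * n / N) for n in range(N)]
z = analytic_signal(x)
envelope = [abs(zn) for zn in z]
inst_freq = [cmath.phase(z[n + 1] / z[n]) * N / (2 * math.pi) for n in range(N - 1)]
```

    In the full method this is applied to each IMF separately, so that each instantaneous frequency track is physically meaningful.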

  19. Measuring Robustness of Timetables at Stations using a Probability Distribution

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex

    Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness of the station's infrastructure layout and plan of operation. However, these three methods do not take the timetable at the station into consideration. Therefore, two methods are introduced in this paper, making it possible to estimate the robustness of different timetables at a station or different infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time...
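
    The buffer-time idea behind the probability-distribution method can be illustrated with a common simplifying assumption (not necessarily the paper's model): if primary delays are exponentially distributed with a given mean, the expected delay transferred to the following train across a buffer b has the closed form E[max(D - b, 0)] = mean * exp(-b / mean).

```python
import math

def expected_knock_on_delay(mean_delay, buffer_time):
    """Expected delay passed on to the next train, assuming exponentially
    distributed primary delays (an illustrative modelling choice):

        E[max(D - b, 0)] = mean_delay * exp(-buffer_time / mean_delay)
    """
    return mean_delay * math.exp(-buffer_time / mean_delay)

# Primary delays with mean 3 min and a 2-minute buffer: roughly half the
# mean delay still propagates
e = expected_knock_on_delay(3.0, 2.0)
```

    This makes the timetable dependence explicit: increasing the scheduled buffer between two trains reduces the expected knock-on delay exponentially, which is exactly the kind of trade-off a robustness measure at a station must capture.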

  20. A second list of new planetary nebulae found on United Kingdom 1.2-m Schmidt telescope plates

    International Nuclear Information System (INIS)

    Longmore, A.J.; Tritton, S.B.

    1980-01-01

    Positions, photographs and descriptions are given for 11 new planetary nebulae discovered on United Kingdom Schmidt plates. One of the planetary nebulae has the highest galactic latitude of any known planetary, and may be associated with a magnitude 9 G5 star. Near-infrared (J,H,K) magnitudes are given for the star. (author)