WorldWideScience

Sample records for two-dimensional probability distributions

  1. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
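As a hedged sketch, the two-step recipe described in the abstract (spectral coloring of white Gaussian noise, then a memoryless transform of the marginals) can be written in a few lines of NumPy/SciPy. The power-law spectrum and the exponential target marginal below are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256

# Step 0: white noise sample set of Gaussian distributed numbers
white = rng.standard_normal((n, n))

# Step 1: impose the desired spectrum (an illustrative isotropic power law)
kx = np.fft.fftfreq(n)
ky = np.fft.fftfreq(n)
k = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))
k[0, 0] = 1.0 / n                      # avoid division by zero at k = 0
psd = k ** -2.0
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)))
colored = (colored - colored.mean()) / colored.std()

# Step 2: map the colored Gaussian field to the desired marginal
# (here an exponential distribution) through the Gaussian CDF
field = stats.expon.ppf(stats.norm.cdf(colored))
```

Because the marginal transform is nonlinear, it slightly distorts the imposed spectrum, which is consistent with the authors calling the method an engineering approach rather than an exact one.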

  2. Lorentz covariant tempered distributions in two-dimensional space-time

    International Nuclear Information System (INIS)

    Zinov'ev, Yu.M.

    1989-01-01

The problem of describing Lorentz covariant distributions without any spectral condition has hitherto remained unsolved even for two-dimensional space-time. Attempts to solve this problem have already been made. Zharinov obtained an integral representation for the Laplace transform of Lorentz invariant distributions with support in the product of two-dimensional future light cones. However, this integral representation does not make it possible to obtain a complete description of the corresponding Lorentz invariant distributions. In this paper the author gives a complete description of Lorentz covariant distributions for two-dimensional space-time. No spectral condition is assumed.

  3. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

    with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...

  4. Collision probability in two-dimensional lattice by ray-trace method and its applications to cell calculations

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro

    1985-03-01

A series of formulations to evaluate the collision probability for multi-region cells described by any of three one-dimensional coordinate systems (plane, sphere and cylinder) or by the general two-dimensional cylindrical coordinate system is presented. They are expressed in a form suitable for a common numerical process named the ''Ray-Trace'' method. Applications of the collision probability method to two optional treatments of resonance absorption are presented. One is a modified table-look-up method based on the intermediate resonance approximation, and the other is a rigorous method to calculate the resonance absorption in a multi-region cell, in which the nearly continuous energy spectrum of the resonance neutron range can be solved and the interaction effect between different resonance nuclides can be evaluated. Two works on resonance absorption in a doubly heterogeneous system with grain structure are presented. First, the effect of a random distribution of particles embedded in a graphite diluent on the resonance integral is studied. Next, the ''Accretion'' method proposed by Leslie and Jonsson to define the collision probability in a doubly heterogeneous system is applied to evaluate the resonance absorption in coated particles dispersed in the fuel pellets of the HTGR. Several optional models are proposed to define the collision rates in a medium with microscopic heterogeneity. By making use of the collision probability method developed in the present study, the JAERI thermal reactor standard nuclear design code system SRAC has been developed. Results of several benchmark tests of the SRAC are presented. The analyses of critical experiments of the SHE, DCA, and FNR show good agreement of critical masses with their experimental values. (J.P.N.)

  5. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence

    Science.gov (United States)

    Bandi, Mahesh M.; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.
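The central mechanism here, the product of two correlated Gaussian variables, is easy to reproduce by Monte Carlo; the correlation coefficient below is an arbitrary illustrative value:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.3            # force-velocity correlation (illustrative value)
n = 1_000_000

# Correlated standard normals: f plays the force, v the velocity
f = rng.standard_normal(n)
v = rho * f + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
power = f * v

# Mean injected power equals rho, although the most probable value is zero
print(power.mean())           # close to rho

# Asymmetric exponential tails: large positive fluctuations are more likely
pos_tail = (power > 3.0).mean()
neg_tail = (power < -3.0).mean()
```

In this sketch the positive tail decays like exp(-x/(1+rho)) and the negative tail like exp(-x/(1-rho)), which is the tail asymmetry the abstract attributes to the force-velocity correlation.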

  6. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  7. On the size distribution of one-, two- and three-dimensional Voronoi cells

    International Nuclear Information System (INIS)

    Marthinsen, K.

    1994-03-01

The present report presents the different cell size distributions obtained by computer simulations of random Voronoi cell structures in one-, two- and three-dimensional space. The random Voronoi cells are constructed from cell centroids randomly distributed along a string, in the plane and in three-dimensional space, respectively. The size distributions are based on 2-3 · 10^4 cells. For the spatial polyhedra the distributions of volumes, areas and radii are all presented, and the two latter quantities are compared to the distributions of areas and radii from a planar section through the three-dimensional structure as well as to the corresponding distributions obtained from a pure two-dimensional cell structure. 11 refs., 11 figs
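For the one-dimensional case the construction is elementary: cell boundaries sit at the midpoints between neighbouring centroids. A small sketch, assuming uniform random centroids on a unit interval:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
pts = np.sort(rng.uniform(0.0, 1.0, n))

# 1D Voronoi: cell boundaries are midpoints between neighbouring centroids
mid = 0.5 * (pts[:-1] + pts[1:])
edges = np.concatenate(([0.0], mid, [1.0]))
sizes = np.diff(edges)

# Normalise so the mean cell size is 1; for Poisson centroids the
# normalised size is known to follow the Gamma-type density 4*y*exp(-2*y)
x = sizes * n
print(x.mean())   # ~1.0
```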

  8. Approximate solutions of the two-dimensional integral transport equation by collision probability methods

    International Nuclear Information System (INIS)

    Sanchez, Richard

    1977-01-01

A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding, and water or homogenized structural material. The cells are divided into zones that are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is made by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems.

  9. Probability distribution of magnetization in the one-dimensional Ising model: effects of boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Antal, T [Physics Department, Simon Fraser University, Burnaby, BC V5A 1S6 (Canada); Droz, M [Departement de Physique Theorique, Universite de Geneve, CH 1211 Geneva 4 (Switzerland); Racz, Z [Institute for Theoretical Physics, Eoetvoes University, 1117 Budapest, Pazmany setany 1/a (Hungary)

    2004-02-06

Finite-size scaling functions are investigated both for the mean-square magnetization fluctuations and for the probability distribution of the magnetization in the one-dimensional Ising model. The scaling functions are evaluated in the limit of the temperature going to zero (T → 0) and the size of the system going to infinity (N → ∞) while N[1 - tanh(J/k_B T)] is kept finite (J being the nearest-neighbour coupling). Exact calculations using various boundary conditions (periodic, antiperiodic, free, block) demonstrate explicitly how the scaling functions depend on the boundary conditions. We also show that the block (small part of a large system) magnetization distribution results are identical to those obtained for free boundary conditions.
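For small chains the boundary-condition dependence of the magnetization distribution can be checked directly by brute-force enumeration. This is only a finite-N sketch, not the authors' scaling-limit calculation:

```python
import itertools
import math

def magnetization_pdf(N, J_over_kT, periodic=True):
    """Exact magnetization distribution of a 1D Ising chain by enumeration."""
    probs = {}
    for spins in itertools.product((-1, 1), repeat=N):
        # Nearest-neighbour energy; the last bond exists only if periodic
        E = -sum(spins[i] * spins[(i + 1) % N]
                 for i in range(N if periodic else N - 1))
        w = math.exp(-J_over_kT * E)
        m = sum(spins)
        probs[m] = probs.get(m, 0.0) + w
    Z = sum(probs.values())
    return {m: p / Z for m, p in sorted(probs.items())}

pbc = magnetization_pdf(10, 0.5, periodic=True)
free = magnetization_pdf(10, 0.5, periodic=False)
```

Comparing `pbc` and `free` shows explicitly that the two boundary conditions give different magnetization distributions at the same temperature, the effect the paper quantifies in the scaling limit.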

  10. Calculation of ruin probabilities for a dense class of heavy tailed distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady

    2015-01-01

In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate any such distribution. We prove that formulas from renewal theory, with particular attention to ruin probabilities, which hold for common phase-type distributions also hold true for the infinite-dimensional case. We provide algorithms for calculating functionals of interest such as the renewal density and the ruin probability. It might be of interest to approximate a given heavy tailed distribution of some other type by a distribution from the class of infinite-dimensional phase-type distributions, and to this end we provide a calibration procedure which works for the approximation...

  11. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
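The kind of computation COVAL performs, propagating the distributions of random variables through a function, can be imitated by plain Monte Carlo sampling; the load and strength distributions below are illustrative stand-ins, not the code's actual models:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Random load L and structural strength S (illustrative distributions)
load = rng.lognormal(mean=0.0, sigma=0.25, size=n)
strength = rng.normal(loc=2.0, scale=0.2, size=n)

# Distribution of the safety margin g = S - L follows by sampling;
# the failure probability is the mass of that distribution below zero
margin = strength - load
p_fail = (margin < 0.0).mean()
print(p_fail)
```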

  12. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was ...

  13. Probability distributions for first neighbor distances between resonances that belong to two different families

    International Nuclear Information System (INIS)

    Difilippo, F.C.

    1994-01-01

    For a mixture of two families of resonances, we found the probability distribution for the distance, as first neighbors, between resonances that belong to different families. Integration of this distribution gives the probability of accidental overlapping of resonances of one isotope by resonances of the other, provided that the resonances of each isotope belong to a single family. (author)
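For the simplest idealized case, two statistically independent families with Poisson level statistics, the first-neighbor distance between members of different families can be checked by simulation (the densities below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
lam1, lam2, L = 1.0, 0.5, 10_000.0

# Two independent families of resonances at Poisson-distributed energies
e1 = np.sort(rng.uniform(0.0, L, rng.poisson(lam1 * L)))
e2 = np.sort(rng.uniform(0.0, L, rng.poisson(lam2 * L)))

# Distance from each family-1 resonance to its nearest family-2 neighbour
idx = np.clip(np.searchsorted(e2, e1), 1, len(e2) - 1)
d = np.minimum(np.abs(e1 - e2[idx - 1]), np.abs(e1 - e2[idx]))

# For independent Poisson families this distance is the minimum of two
# exponentials, i.e. Exp(2*lam2), with mean 1/(2*lam2)
print(d.mean())   # ~1.0
```

In this Poisson sketch the probability of accidental overlap within a half-width Γ is 1 - exp(-2·lam2·Γ); the paper treats the general case where each family has its own (non-Poisson) statistics.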

  14. OPT-TWO: Calculation code for two-dimensional MOX fuel models in the optimum concentration distribution

    International Nuclear Information System (INIS)

    Sato, Shohei; Okuno, Hiroshi; Sakai, Tomohiro

    2007-08-01

OPT-TWO is a calculation code which calculates the optimum concentration distribution, i.e., the most conservative concentration distribution from the standpoint of nuclear criticality safety, of MOX (mixed uranium and plutonium oxide) fuels in a two-dimensional system. To achieve the optimum concentration distribution, we apply the principle of flattened fuel importance distribution, with which the fuel system has the highest reactivity. Based on this principle, OPT-TWO iteratively performs the following three calculation steps: (1) calculation of the forward and adjoint neutron fluxes and the neutron multiplication factor with the TWOTRAN code, a two-dimensional neutron transport code based on the SN method; (2) calculation of the fuel importance; and (3) determination of the quantity of fuel to be transferred. In OPT-TWO, the components of the MOX fuel are MOX powder, uranium dioxide powder, and an additive. This report describes the contents of the calculation, the computational method, and the installation of OPT-TWO, and also describes how to apply it to criticality calculations. (author)

  15. Energy Spectra of Vortex Distributions in Two-Dimensional Quantum Turbulence

    Directory of Open Access Journals (Sweden)

    Ashton S. Bradley

    2012-10-01

We theoretically explore key concepts of two-dimensional turbulence in a homogeneous compressible superfluid described by a dissipative two-dimensional Gross-Pitaevskii equation. Such a fluid supports quantized vortices that have a size characterized by the healing length ξ. We show that, for the divergence-free portion of the superfluid velocity field, the kinetic-energy spectrum over wave number k may be decomposed into an ultraviolet regime (k≫ξ^{-1}) having a universal k^{-3} scaling arising from the vortex core structure, and an infrared regime (k≪ξ^{-1}) with a spectrum that arises purely from the configuration of the vortices. The Novikov power-law distribution of intervortex distances with exponent -1/3 for vortices of the same sign of circulation leads to an infrared kinetic-energy spectrum with a Kolmogorov k^{-5/3} power law, which is consistent with the existence of an inertial range. The presence of these k^{-3} and k^{-5/3} power laws, together with the constraint of continuity at the smallest configurational scale k≈ξ^{-1}, allows us to derive a new analytical expression for the Kolmogorov constant that we test against a numerical simulation of a forced homogeneous, compressible, two-dimensional superfluid. The numerical simulation corroborates our analysis of the spectral features of the kinetic-energy distribution, once we introduce the concept of a clustered fraction consisting of the fraction of vortices that have the same sign of circulation as their nearest neighboring vortices. Our analysis presents a new approach to understanding two-dimensional quantum turbulence and interpreting similarities and differences with classical two-dimensional turbulence, and suggests new methods to characterize vortex turbulence in two-dimensional quantum fluids via vortex position and circulation measurements.

  16. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    DEFF Research Database (Denmark)

    Bacco, Davide; Christensen, Jesper Bjerge; Usuga Castaneda, Mario A.

    2016-01-01

Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  17. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    Science.gov (United States)

    Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo

    2016-12-01

Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  18. Device for measuring the two-dimensional distribution of a radioactive substance on a surface

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

A device is described with which, using a one-dimensional position-sensitive proportional counter tube, one can measure the two-dimensionally distributed radioactivity of a surface and, after computer processing, plot it to scale or display it two-dimensionally on a monitor. (orig.) [de]

  19. An analysis of infiltration with moisture content distribution in a two-dimensional discretized water content domain

    KAUST Repository

    Yu, Han; Douglas, Craig C.

    2014-01-01

    On the basis of unsaturated Darcy's law, the Talbot-Ogden method provides a fast unconditional mass conservative algorithm to simulate groundwater infiltration in various unsaturated soil textures. Unlike advanced reservoir modelling methods that compute unsaturated flow in space, it only discretizes the moisture content domain into a suitable number of bins so that the vertical water movement is estimated piecewise in each bin. The dimensionality of the moisture content domain is extended from one dimensional to two dimensional in this study, which allows us to distinguish pore shapes within the same moisture content range. The vertical movement of water in the extended model imitates the infiltration phase in the Talbot-Ogden method. However, the difference in this extension is the directional redistribution, which represents the horizontal inter-bin flow and causes the water content distribution to have an effect on infiltration. Using this extension, we mathematically analyse the general relationship between infiltration and the moisture content distribution associated with wetting front depths in different bins. We show that a more negatively skewed moisture content distribution can produce a longer ponding time, whereas a higher overall flux cannot be guaranteed in this situation. It is proven on the basis of the water content probability distribution independent of soil textures. To illustrate this analysis, we also present numerical examples for both fine and coarse soil textures.

  20. An analysis of infiltration with moisture content distribution in a two-dimensional discretized water content domain

    KAUST Repository

    Yu, Han

    2014-06-11

On the basis of unsaturated Darcy's law, the Talbot-Ogden method provides a fast unconditional mass conservative algorithm to simulate groundwater infiltration in various unsaturated soil textures. Unlike advanced reservoir modelling methods that compute unsaturated flow in space, it only discretizes the moisture content domain into a suitable number of bins so that the vertical water movement is estimated piecewise in each bin. The dimensionality of the moisture content domain is extended from one dimensional to two dimensional in this study, which allows us to distinguish pore shapes within the same moisture content range. The vertical movement of water in the extended model imitates the infiltration phase in the Talbot-Ogden method. However, the difference in this extension is the directional redistribution, which represents the horizontal inter-bin flow and causes the water content distribution to have an effect on infiltration. Using this extension, we mathematically analyse the general relationship between infiltration and the moisture content distribution associated with wetting front depths in different bins. We show that a more negatively skewed moisture content distribution can produce a longer ponding time, whereas a higher overall flux cannot be guaranteed in this situation. It is proven on the basis of the water content probability distribution independent of soil textures. To illustrate this analysis, we also present numerical examples for both fine and coarse soil textures.

  1. Bayesian approach for peak detection in two-dimensional chromatography.

    Science.gov (United States)

    Vivó-Truyols, Gabriel

    2012-03-20

A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method starts with a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a sophisticated algorithm is constructed to decide which of the individual one-dimensional peaks originate from the same compound and should therefore be arranged into a two-dimensional peak. The merging algorithm is based on Bayesian inference. The user sets prior information about certain parameters (e.g., second-dimension retention time variability, first-dimension band broadening, chromatographic noise). On the basis of these priors, the algorithm calculates the probability of myriads of peak arrangements (i.e., ways of merging one-dimensional peaks), finding which of them holds the highest value. Uncertainty in each parameter can be accounted for by suitably adapting its probability distribution function, which in turn may change the final decision on the most probable peak arrangement. It has been demonstrated that the Bayesian approach presented in this paper follows the chromatographers' intuition. The algorithm has been applied and tested with LC × LC and GC × GC data and takes around 1 min to process chromatograms with several thousands of peaks.
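The merging decision can be caricatured as a two-hypothesis Bayes factor on the second-dimension retention-time shift between candidate peaks; all parameters below are invented for illustration and are not from the paper:

```python
import numpy as np

# Toy merging decision: two 1-D peaks from adjacent modulations either stem
# from one compound (shift ~ N(0, tau)) or from distinct compounds
# (shift ~ uniform over the modulation window). All values are assumptions.
tau = 0.05        # prior second-dimension retention-time variability (s)
window = 4.0      # modulation period (s)
prior_same = 0.5  # prior probability that the two peaks belong together

def posterior_same(dt):
    """Posterior probability that peaks separated by dt are one compound."""
    like_same = np.exp(-dt**2 / (2.0 * tau**2)) / (np.sqrt(2.0 * np.pi) * tau)
    like_diff = 1.0 / window
    num = like_same * prior_same
    return num / (num + like_diff * (1.0 - prior_same))

print(posterior_same(0.02))   # nearly aligned peaks: merge
print(posterior_same(1.50))   # far apart: keep separate
```

Widening `tau` (more prior uncertainty in the second-dimension retention time) makes the algorithm more willing to merge distant peaks, which mirrors the sensitivity to priors described in the abstract.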

  2. Stress distribution in two-dimensional silos

    Science.gov (United States)

    Blanco-Rodríguez, Rodolfo; Pérez-Ángel, Gabriel

    2018-01-01

Simulations of a polydispersed two-dimensional silo were performed using molecular dynamics, with different numbers of grains reaching up to 64 000, verifying numerically the model derived by Janssen and also its main assumption that the walls carry part of the weight due to static friction between the grains themselves and between the grains and the silo's walls. We vary the friction coefficient, the radii dispersity, the silo width, and the size of the grains. We find that Janssen's model becomes less relevant as the silo width increases, since the behavior of the stresses becomes more hydrostatic. Likewise, we obtain the normal and tangential stress distributions on the walls, evidencing the existence of points of maximum stress. We also obtained the stress matrix, with which we observe zones of load concentration, located always at a height around two thirds of the granular column. Finally, we observe that the size of the grains affects the distribution of stresses, increasing the weight on the bottom and reducing the normal stress on the walls as the grains are made smaller (for the same total mass of granulate), giving again a more hydrostatic, and therefore less Janssen-type, behavior for the weight of the column.
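The saturation predicted by Janssen's model, and its weakening for wide silos, can be sketched directly from the classical formula for a two-dimensional column (parameters below are arbitrary illustrative values):

```python
import numpy as np

# Janssen saturation of the vertical stress with depth vs hydrostatic growth
rho_g = 1.0     # bulk density times gravity (illustrative units)
D = 1.0         # silo width
mu = 0.5        # wall friction coefficient
K = 0.8         # Janssen ratio of horizontal to vertical stress

z = np.linspace(0.0, 10.0, 200)
lam = D / (2.0 * mu * K)                       # saturation length (two walls)
sigma_janssen = rho_g * lam * (1.0 - np.exp(-z / lam))
sigma_hydro = rho_g * z
```

Since `lam` grows with the silo width `D`, a wider silo keeps `sigma_janssen` close to the hydrostatic profile over any fixed depth, consistent with the simulations' finding that wide silos behave more hydrostatically.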

  3. Survival probability in a one-dimensional quantum walk on a trapped lattice

    International Nuclear Information System (INIS)

    Goenuelol, Meltem; Aydiner, Ekrem; Shikano, Yutaka; Muestecaplioglu, Oezguer E

    2011-01-01

The dynamics of the survival probability of quantum walkers on a one-dimensional lattice with a random distribution of absorbing immobile traps is investigated. The survival probability of quantum walkers is compared with that of classical walkers. Numerical and analytical results show that the time dependence of the survival probability of quantum walkers has a piecewise stretched-exponential character that depends on the density of traps. The crossover between the quantum analogues of the Rosenstock and Donsker-Varadhan behavior is identified.
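A minimal discrete-time analogue, a Hadamard walk on a line with randomly placed absorbing sites (not necessarily the authors' exact model), already shows the monotone decay of the survival probability:

```python
import numpy as np

rng = np.random.default_rng(5)
N, steps, trap_density = 401, 150, 0.05

# Random immobile traps; the walker is absorbed on contact
traps = rng.random(N) < trap_density
traps[N // 2] = False                      # keep the starting site trap-free

# State psi[x, c] with coin c = 0 (move left) or 1 (move right)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
psi = np.zeros((N, 2), complex)
psi[N // 2] = [1.0 / np.sqrt(2.0), 1j / np.sqrt(2.0)]   # symmetric start

survival = []
for _ in range(steps):
    psi = psi @ H.T                        # coin toss
    psi = np.stack([np.roll(psi[:, 0], -1),
                    np.roll(psi[:, 1], 1)], axis=1)     # conditional shift
    psi[traps] = 0.0                       # absorption at trap sites
    survival.append(float(np.sum(np.abs(psi) ** 2)))
```

The recorded `survival` sequence is non-increasing by construction (unitary step followed by a projection); fitting its decay against trap density is the kind of analysis the paper carries out.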

  4. The transmission probability method in one-dimensional cylindrical geometry

    International Nuclear Information System (INIS)

    Rubin, I.E.

    1983-01-01

The collision probability method widely used in solving the problems of neutron transport in a reactor cell is reliable for simple cells with a small number of zones. Increasing the number of zones, and also taking into account the anisotropy of scattering, greatly increases the scope of the calculations. In order to reduce the calculation time, the transmission probability method is suggested for flux calculation in one-dimensional cylindrical geometry, taking into account the scattering anisotropy. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The use of the transmission probability method allows the angular and spatial dependences of the neutron distributions to be represented completely without an increase in the scope of the calculations. The method is especially effective in solving multi-group problems.

  5. Two multi-dimensional uncertainty relations

    International Nuclear Information System (INIS)

    Skala, L; Kapsa, V

    2008-01-01

Two multi-dimensional uncertainty relations, one related to the probability density and the other to the probability density current, are derived and discussed. Both relations are stronger than the usual uncertainty relations for the coordinates and momenta.

  6. Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Huiwu Luo

    2015-01-01

The computational procedure for hyperspectral images (HSI) is extremely complex, not only due to the high-dimensional information, but also due to the highly correlated data structure. Effective processing and analysis of HSI therefore face many difficulties. Dimensionality reduction has been found to be a powerful tool for high-dimensional data analysis. Local Fisher's linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with the local variance for the construction of the weight matrix, and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low-dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.

  7. Representative measurement of two-dimensional reactive phosphate distributions and co-distributed iron(II) and sulfide in seagrass sediment porewaters

    DEFF Research Database (Denmark)

    Pagès, Anaïs; Teasdale, Peter R.; Robertson, David

    2011-01-01

    The high degree of heterogeneity within sediments can make interpreting one-dimensional measurements difficult. The recent development and use of in situ techniques that measure two-dimensional distributions of porewater solutes have facilitated investigation of the role of spatial heterogeneity ...

  8. Two-dimensional atom localization via two standing-wave fields in a four-level atomic system

    International Nuclear Information System (INIS)

    Zhang Hongtao; Wang Hui; Wang Zhiping

    2011-01-01

    We propose a scheme for the two-dimensional (2D) localization of an atom in a four-level Y-type atomic system. By applying two orthogonal standing-wave fields, the atoms can be localized at some special positions, leading to the formation of sub-wavelength 2D periodic spatial distributions. The localization peak position and number as well as the conditional position probability can be controlled by the intensities and detunings of optical fields.
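A toy picture of such localization: the conditional position probability is sharply peaked wherever the combined standing-wave Rabi frequency matches a chosen detuning. The Lorentzian filter below is an assumed illustrative form, not the four-level Y-type calculation of the paper:

```python
import numpy as np

# Two orthogonal standing waves; position-dependent combined Rabi frequency
k = np.pi                       # standing-wave wavenumber (assumed)
Omega0, delta, gamma = 5.0, 4.0, 0.3   # drive, detuning, width (assumed)

x = np.linspace(-1.0, 1.0, 400)
X, Y = np.meshgrid(x, x, indexing="ij")
Omega = Omega0 * np.sqrt(np.sin(k * X) ** 2 + np.sin(k * Y) ** 2)

# Lorentzian filter: probability peaks where Omega(x, y) equals delta,
# giving a sub-wavelength 2D periodic pattern of localization peaks
P = gamma**2 / ((Omega - delta) ** 2 + gamma**2)
P /= P.sum()
```

Changing `delta` or `Omega0` moves and multiplies the peaks, which mimics how the abstract says intensities and detunings control the localization pattern.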

  9. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

Motivated by the increasing need to analyze complex, uncertain multidimensional data, this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work is motivated by a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions.
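The core idea, storing an approximate distribution per fact so that aggregates can be computed without revisiting the base data, can be sketched with shared-bin histograms (an assumed simplification of the paper's method):

```python
import numpy as np

rng = np.random.default_rng(6)

# Each base fact carries a distribution (here: samples); pre-aggregation
# stores an approximate histogram per fact instead of the raw values
edges = np.linspace(0.0, 10.0, 21)            # shared 20-bin approximation

def approximate(samples):
    h, _ = np.histogram(samples, bins=edges)
    return h / h.sum()                         # probability vector over bins

facts = [rng.normal(mu, 1.0, 5_000).clip(0.0, 10.0) for mu in (3.0, 5.0, 7.0)]
hists = [approximate(s) for s in facts]

# Aggregate (mixture of equally likely facts) computed from the
# pre-aggregated histograms alone -- no need to revisit the base data
agg = np.mean(hists, axis=0)
centers = 0.5 * (edges[:-1] + edges[1:])
print(centers @ agg)    # approximate mean of the aggregate, ~5.0
```

The bin count controls the time/space versus accuracy trade-off, which is the approximation knob the abstract refers to.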

  10. Measurement of two-dimensional thermal neutron flux in a water phantom and evaluation of dose distribution characteristics

    International Nuclear Information System (INIS)

    Yamamoto, Kazuyoshi; Kumada, Hiroaki; Kishi, Toshiaki; Torii, Yoshiya; Horiguchi, Yoji

    2001-03-01

To evaluate the nitrogen dose, the boron dose and the gamma-ray dose produced by neutron capture reactions of hydrogen during medical irradiation, the two-dimensional distribution of the thermal neutron flux is very important, because these doses are proportional to the thermal neutron distribution. This report describes the measurement of the two-dimensional thermal neutron distribution in a head water phantom irradiated by neutron beams of the JRR-4, and the evaluation of the dose distribution characteristics. The thermal neutron flux in the phantom was measured with gold wires placed spokewise at every 30 degrees in order to avoid mutual interaction. The distribution of the thermal neutron flux was also calculated using a two-dimensional Lagrange interpolation program (radius, angle direction) developed for this work. The analysis confirmed that when the radiation field in the phantom contains a void, the distribution becomes distorted, with an annular peak outside the void, although an improved dose profile in the depth direction was confirmed in that case. (author)

  11. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)
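
As a rough illustration of how a log-normal prior acts on a single measurement, here is a grid-based posterior computation. The measurement value, measurement error, and prior parameters are hypothetical, and this sketch is not the Los Alamos code.

```python
import math

def lognormal_pdf(x, mu, sigma):
    """Log-normal density; used here as the prior on the true result."""
    if x <= 0:
        return 0.0
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * math.sqrt(2 * math.pi))

def normal_pdf(x, mu, sigma):
    """Gaussian density; used here as the measurement-error likelihood."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical bioassay result y with Gaussian measurement error sigma_meas,
# and a log-normal prior with median exp(mu_prior) = 1 (arbitrary units):
y, sigma_meas = 2.0, 0.5
mu_prior, sigma_prior = 0.0, 1.0

grid = [i * 0.01 for i in range(1, 1001)]   # candidate true values 0.01 .. 10
unnorm = [lognormal_pdf(t, mu_prior, sigma_prior) * normal_pdf(y, t, sigma_meas)
          for t in grid]
norm = sum(unnorm) * 0.01                   # Riemann-sum normalisation
posterior = [u / norm for u in unnorm]
post_mean = sum(t * p for t, p in zip(grid, posterior)) * 0.01
```

The posterior mean falls between the prior median and the raw measurement, showing the shrinkage a prior imposes on noisy bioassay data.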

  12. Decoherence in two-dimensional quantum walks

    International Nuclear Information System (INIS)

    Oliveira, A. C.; Portugal, R.; Donangelo, R.

    2006-01-01

    We analyze the decoherence in quantum walks on two-dimensional lattices generated by broken-link-type noise. In this type of decoherence, the links of the lattice are randomly broken with some given constant probability. We obtain the evolution equation for a quantum walker moving on two-dimensional (2D) lattices subject to this noise, and we point out how to generalize it to lattices in higher dimensions. In the nonsymmetric case, when the probability of breaking links in one direction differs from the probability in the perpendicular direction, we obtain a nontrivial result: if one fixes the link-breaking probability in one direction and gradually increases the probability in the other direction from 0 to 1, the decoherence initially increases until it reaches a maximum value, and then it decreases. This means that, in some cases, one can increase the noise level and still obtain more coherence. Physically, this can be explained as a transition from a decoherent 2D walk to a coherent 1D walk.

  13. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions...... and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate...... multidimensional data analysis that is considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions)....

  14. Two-dimensional beam profiles and one-dimensional projections

    Science.gov (United States)

    Findlay, D. J. S.; Jones, B.; Adams, D. J.

    2018-05-01

    One-dimensional projections of improved two-dimensional representations of transverse profiles of particle beams are proposed for fitting to data from harp-type monitors measuring beam profiles on particle accelerators. Composite distributions, with tails smoothly matched on to a central (inverted) parabola, are shown to give noticeably better fits than single Gaussian and single parabolic distributions to data from harp-type beam profile monitors all along the proton beam transport lines to the two target stations on the ISIS Spallation Neutron Source. Some implications for inferring beam current densities on the beam axis are noted.
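
A minimal sketch of such a composite profile, assuming exponential tails matched to the central inverted parabola in both value and slope (the paper's exact tail form is not restated in the abstract, so the exponential choice is an assumption):

```python
import math

def composite_profile(x, A=1.0, a=2.0, x0=1.5):
    """Central inverted parabola A*(1 - (x/a)^2) for |x| <= x0, with exponential
    tails beyond, matched in value and slope at x = x0 (C^1 continuity)."""
    assert 0 < x0 < a
    p0 = A * (1 - (x0 / a) ** 2)                  # parabola value at the joint
    k = 2 * x0 / (a ** 2 * (1 - (x0 / a) ** 2))   # decay rate matching the slope
    if abs(x) <= x0:
        return A * (1 - (x / a) ** 2)
    return p0 * math.exp(-k * (abs(x) - x0))

# Numerical C^1 check at the joint x0 = 1.5:
eps = 1e-6
left = composite_profile(1.5 - eps)
right = composite_profile(1.5 + eps)
slope_left = (composite_profile(1.5) - composite_profile(1.5 - eps)) / eps
slope_right = (composite_profile(1.5 + eps) - composite_profile(1.5)) / eps
```

The closed-form decay rate k follows directly from equating the parabola's derivative with the tail's derivative at the matching point.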

  15. A two dimensional approach for temperature distribution in reactor lower head during severe accident

    International Nuclear Information System (INIS)

    Cao, Zhen; Liu, Xiaojing; Cheng, Xu

    2015-01-01

    Highlights: • Two dimensional module is developed to analyze integrity of lower head. • Verification step has been done to evaluate feasibility of new module. • The new module is applied to simulate large-scale advanced PWR. • Importance of 2-D approach is clearly quantified. • Major parameters affecting vessel temperature distribution are identified. - Abstract: In order to evaluate the safety margin during a postulated severe accident, a module named ASAP-2D (Accident Simulation on Pressure vessel-2 Dimensional), which can be implemented into severe accident simulation codes (such as ATHLET-CD), is developed at Shanghai Jiao Tong University. Based on two-dimensional spherical coordinates, the transient heat conduction equation is solved implicitly. Together with the solid vessel thickness, the heat flux distribution and heat transfer coefficient at the outer vessel surface are obtained. The heat transfer regime after critical heat flux has been exceeded (post-CHF regime) can be simulated in the code, and the transition behavior of boiling crisis (from spatial and temporal points of view) can be predicted. The module is verified against a one-dimensional analytical solution with uniform heat flux distribution, and afterwards it is applied to the benchmark illustrated in NUREG/CR-6849. The benchmark calculation indicates that the maximum heat flux at the outer surface of the RPV can be around 20% lower than that at the inner surface due to two-dimensional heat conduction. A preliminary analysis is then performed on the integrity of a reactor vessel whose geometric parameters and boundary conditions are derived from a large-scale advanced pressurized water reactor. Results indicate that the heat flux remains lower than the critical heat flux. Sensitivity analysis indicates that the outer heat flux distribution is more sensitive to the input heat flux distribution and the transition boiling correlation than to the mass flow rate in the external reactor vessel cooling (ERVC) channel.
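
Part of the inner/outer flux difference is purely geometric: in steady state with no heat generation in the wall, energy conservation across a spherical shell dilutes the flux by (r_in/r_out)². A sketch with hypothetical lower-head-sized dimensions (the 2D conduction effect reported above comes on top of this simple 1D estimate):

```python
# Steady-state energy balance across a spherical shell: the same total power
# crosses spheres of radius r_in and r_out, so flux scales as 1/r^2.
# All numbers below are hypothetical, roughly lower-head-sized.
r_in = 2.0                           # inner vessel radius, m (assumed)
thickness = 0.15                     # wall thickness, m (assumed)
r_out = r_in + thickness

q_in = 500e3                         # inner-surface heat flux, W/m^2 (assumed)
q_out = q_in * (r_in / r_out) ** 2   # outer-surface flux from the balance
reduction = 1 - q_out / q_in         # fractional flux reduction, ~13% here
```

With these numbers the geometric dilution alone accounts for roughly a 13% reduction; lateral (2D) conduction redistributes the flux further.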

  16. Analysis of two-dimensional microdischarge distribution in dielectric-barrier discharges

    International Nuclear Information System (INIS)

    Chirokov, A; Gutsol, A; Fridman, A; Sieber, K D; Grace, J M; Robinson, K S

    2004-01-01

    The two-dimensional spatial distribution of microdischarges in atmospheric pressure dielectric-barrier discharges (DBDs) in air was studied. Experimental images of DBDs (Lichtenberg figures) were obtained using photostimulable phosphors. The storage phosphor imaging method takes advantage of the linear response of the phosphor for characterization of microdischarge intensity and position. A microdischarge interaction model in DBDs is proposed and a Monte Carlo simulation of microdischarge interactions in the discharge is presented. Comparison of modelled and experimental images indicates interactions and short-range structuring of microdischarge channels
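
The paper's interaction model is not detailed in the abstract; as a stand-in, a simple hard-core (minimum-separation) Monte Carlo placement illustrates how mutual repulsion of microdischarge channels produces short-range structuring relative to a pure Poisson pattern. The channel count and exclusion radius are arbitrary.

```python
import random, math

random.seed(1)

def place_channels(n, r_min, size=1.0, max_tries=100000):
    """Sequentially place n points in a size x size square, rejecting any
    candidate closer than r_min to an existing point (a hard-core inhibition
    process standing in for microdischarge channel repulsion)."""
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        p = (random.random() * size, random.random() * size)
        if all(math.dist(p, q) >= r_min for q in pts):
            pts.append(p)
    return pts

channels = place_channels(100, r_min=0.05)
min_gap = min(math.dist(p, q) for i, p in enumerate(channels)
              for q in channels[i + 1:])
```

Unlike a Poisson pattern, which produces arbitrarily close pairs, every pair here respects the exclusion distance, mimicking the observed spacing of channels.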

  17. Ion distributions in a two-dimensional reconnection field geometry

    International Nuclear Information System (INIS)

    Curran, D.B.; Goertz, C.K.; Whelan, T.A.

    1987-01-01

    ISEE observations have shown trapped ion distributions in the magnetosphere along with streaming ion distributions in the magnetosheath. The more energetic ion beams are found to exist further away from the magnetopause than lower-energy ion beams. In order to understand these properties of the data, we have taken a simple two-dimensional reconnection model which contains a neutral line and an azimuthal electric field and compared its predictions with the experimental data of September 8, 1978. Our model explains trapped particles in the magnetosphere due to nonadiabatic mirroring in the magnetosheath and streaming ions in the magnetosheath due to energization at the magnetopause. The model also shows the higher-energy ions extending further into the magnetosheath, away from the magnetopause than the lower-energy ions. This suggests the ion data of September 8, 1978 are consistent with a reconnection geometry. Copyright American Geophysical Union 1987

  18. [The reconstruction of two-dimensional distributions of gas concentration in the flat flame based on tunable laser absorption spectroscopy].

    Science.gov (United States)

    Jiang, Zhi-Shen; Wang, Fei; Xing, Da-Wei; Xu, Ting; Yan, Jian-Hua; Cen, Ke-Fa

    2012-11-01

    An experimental method using tunable diode laser absorption spectroscopy, combined with a reconstruction model and algorithm, was studied to reconstruct the two-dimensional distribution of gas concentration. The feasibility of the reconstruction program was verified by numerical simulation. A diagnostic system consisting of 24 lasers was built for the measurement of H2O in a methane/air premixed flame. The two-dimensional distribution of H2O concentration in the flame was reconstructed, showing that the reconstruction results reflect the actual two-dimensional distribution of H2O concentration in the flame. This diagnostic scheme provides a promising solution for combustion control.
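
The reconstruction algorithm is not specified in the abstract; a minimal algebraic-reconstruction (Kaczmarz) sketch on a coarse grid illustrates the idea. For brevity the "beams" here are just row and column sums; a real TDLAS tomography setup uses many more beam angles.

```python
# Recover a 2D concentration field on an n x n grid from line-of-sight
# integrals by cyclically projecting onto each measurement constraint
# (Kaczmarz / ART). Phantom and beam set are illustrative only.
n = 8
phantom = [[1.0 if 2 <= i <= 5 and 2 <= j <= 5 else 0.0 for j in range(n)]
           for i in range(n)]

row_sums = [sum(phantom[i]) for i in range(n)]                 # horizontal beams
col_sums = [sum(phantom[i][j] for i in range(n)) for j in range(n)]  # vertical beams

x = [[0.0] * n for _ in range(n)]
for sweep in range(10):
    for i in range(n):   # project onto each row-sum constraint
        resid = (row_sums[i] - sum(x[i])) / n
        for j in range(n):
            x[i][j] += resid
    for j in range(n):   # project onto each column-sum constraint
        resid = (col_sums[j] - sum(x[i][j] for i in range(n))) / n
        for i in range(n):
            x[i][j] += resid

# The iterate matches every measured projection (row/col sums alone do not
# determine the field uniquely, so only projection agreement is checked):
rec_row_err = max(abs(row_sums[i] - sum(x[i])) for i in range(n))
rec_col_err = max(abs(col_sums[j] - sum(x[i][j] for i in range(n)))
                  for j in range(n))
```

With a consistent measurement set, the cyclic projections converge to a field whose simulated beam integrals reproduce the measurements.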

  19. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
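
The two-moment fit can be written down explicitly for the two-phase hypoexponential case, which covers cv² in [1/2, 1); smaller coefficients of variation need more phases, as in the paper's multi-exponential construction. A sketch:

```python
import math

def fit_hypoexp2(mean, cv):
    """Fit a two-phase hypoexponential distribution (Exp(l1) + Exp(l2) in
    series) to a target mean and coefficient of variation.
    Solves 1/l1 + 1/l2 = mean and 1/l1^2 + 1/l2^2 = (cv*mean)^2;
    a real-valued solution exists for cv^2 in [1/2, 1)."""
    cv2 = cv * cv
    assert 0.5 <= cv2 < 1.0
    s = math.sqrt(2 * cv2 - 1)
    l1 = 2 / (mean * (1 + s))
    l2 = 2 / (mean * (1 - s))
    return l1, l2

# Match a hypothetical service time: mean 3.0, cv 0.8 (cv^2 = 0.64):
l1, l2 = fit_hypoexp2(mean=3.0, cv=0.8)
fit_mean = 1 / l1 + 1 / l2
fit_var = 1 / l1 ** 2 + 1 / l2 ** 2
fit_cv = math.sqrt(fit_var) / fit_mean
```

Substituting the rates back recovers the target moments exactly, confirming the closed-form match.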

  20. Non-parametric comparison of histogrammed two-dimensional data distributions using the Energy Test

    International Nuclear Information System (INIS)

    Reid, Ivan D; Lopes, Raul H C; Hobson, Peter R

    2012-01-01

    When monitoring complex experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger HEP experiments now ramping up, there is a need for automation of this task since the volume of comparisons could overwhelm human operators. However, the two-dimensional histogram comparison tools available in ROOT have been noted in the past to exhibit shortcomings. We discuss a newer comparison test for two-dimensional histograms, based on the Energy Test of Aslan and Zech, which provides more conclusive discrimination between histograms of data coming from different distributions than methods provided in a recent ROOT release.
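
A sketch of the two-sample energy statistic with the logarithmic distance weighting ψ(r) = −ln r used by Aslan and Zech. Normalization conventions vary in the literature; the common 1/n² form is assumed here, and the statistic is applied to point samples rather than binned histograms for brevity.

```python
import random, math

random.seed(7)

def energy_statistic(a, b):
    """Two-sample energy test statistic with psi(r) = -ln r.
    Small values indicate the samples come from the same distribution."""
    def psi(p, q):
        r = math.dist(p, q)
        return -math.log(r) if r > 0 else 0.0
    n, m = len(a), len(b)
    phi_a = sum(psi(a[i], a[j]) for i in range(n) for j in range(i + 1, n)) / (n * n)
    phi_b = sum(psi(b[i], b[j]) for i in range(m) for j in range(i + 1, m)) / (m * m)
    phi_ab = sum(psi(p, q) for p in a for q in b) / (n * m)
    return phi_a + phi_b - phi_ab

def sample2d(n, mu):
    """2D Gaussian sample centred at (mu, mu)."""
    return [(random.gauss(mu, 1), random.gauss(mu, 1)) for _ in range(n)]

same = energy_statistic(sample2d(150, 0.0), sample2d(150, 0.0))
diff = energy_statistic(sample2d(150, 0.0), sample2d(150, 2.0))
```

Samples drawn from clearly different distributions yield a larger statistic than samples from the same distribution, which is the discrimination property the comparison tool relies on.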

  1. Crucial role of sidewalls in velocity distributions in quasi-two-dimensional granular gases

    NARCIS (Netherlands)

    van Zon, J.S.; Kreft, J.; Goldman, D.L.; Miracle, D.; Swift, J. B.; Swinney, H. L.

    2004-01-01

    The significance of sidewalls, which yield velocity distributions with non-Gaussian tails and a peak near zero velocity in quasi-two-dimensional granular gases, was investigated. It was observed that the particles gained energy only through collisions with the bottom of the container, which was not

  2. Two-dimensional potential and charge distributions of positive surface streamer

    International Nuclear Information System (INIS)

    Tanaka, Daiki; Matsuoka, Shigeyasu; Kumada, Akiko; Hidaka, Kunihiko

    2009-01-01

    Information on the potential and the field profile along a surface discharge is required for quantitatively discussing and clarifying the propagation mechanism. The sensing technique with a Pockels crystal has been developed for directly measuring the potential and electric field distribution on a dielectric material. In this paper, the Pockels sensing system consists of a pulse laser and a CCD camera for measuring the instantaneous two-dimensional potential distribution on a 25.4 mm square area with a 50 μm sampling pitch. The temporal resolution is 3.2 ns, which is determined by the pulse width of the laser emission. The transient change in the potential distribution of a positive surface streamer propagating in atmospheric air is measured with this system. The electric field and the charge distributions are also calculated from the measured potential profile. The propagating-direction component of the electric field near the tip of the propagating streamer reaches 3 kV mm⁻¹. When the streamer stops, the potential distribution along a streamer forms an almost linear profile with the distance from the electrode, and its gradient is about 0.5 kV mm⁻¹.

  3. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    Science.gov (United States)

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  4. Benchmark numerical solutions for radiative heat transfer in two-dimensional medium with graded index distribution

    Energy Technology Data Exchange (ETDEWEB)

    Liu, L.H. [School of Energy Science and Engineering, Harbin Institute of Technology, 92 West Dazhi Street, Harbin 150001 (China)]. E-mail: lhliu@hit.edu.cn

    2006-11-15

    In graded index media, the ray goes along a curved path determined by the Fermat principle. Generally, the curved ray trajectory in graded index media is a complex implicit function, and the curved ray tracing is very difficult and complex. Only for some special refractive index distributions can the curved ray trajectory be expressed as a simple explicit function. Two important examples are the layered and the radial graded index distributions. In this paper, the radiative heat transfer problems in a two-dimensional square semitransparent medium with layered and radial graded index distributions are analyzed. After deduction of the ray trajectory, the radiative heat transfer problems are solved by using the Monte Carlo curved ray-tracing method. Some numerical solutions of dimensionless net radiative heat flux and medium temperature are tabulated as benchmark solutions for the future development of approximation techniques for multi-dimensional radiative heat transfer in graded index media.
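
For the layered case, the curved trajectory follows from the ray equation d/ds (n dr/ds) = ∇n, with n·cosθ conserved along the ray (the Snell/Bouguer invariant). A numerical sketch with a hypothetical linear index profile, not one of the paper's benchmark cases:

```python
import math

def trace_ray(n, dndy, y0, theta0, ds=1e-3, steps=5000):
    """Integrate the ray equation d/ds (n dr/ds) = grad n in a layered medium
    n = n(y) with explicit Euler steps. The state (px, py) = n * (unit ray
    direction); px = n*cos(theta) is exactly conserved for layered media."""
    x, y = 0.0, y0
    px = n(y0) * math.cos(theta0)
    py = n(y0) * math.sin(theta0)
    for _ in range(steps):
        nv = n(y)
        x += ds * px / nv
        y += ds * py / nv
        py += ds * dndy(y)   # dpx/ds = 0 because n does not depend on x
    return x, y, px, py

# Hypothetical linear layered profile:
n = lambda y: 1.5 - 0.1 * y
dndy = lambda y: -0.1

x, y, px, py = trace_ray(n, dndy, y0=0.0, theta0=math.radians(30))
# Consistency check: the magnitude |p| should still equal the local index n(y).
norm_err = abs(math.hypot(px, py) - n(y))
```

The ray launched upward into a decreasing index loses vertical momentum (py drops), i.e., it bends back toward the higher-index region, as the explicit trajectory for layered media predicts.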

  5. Direct observation of two dimensional trace gas distributions with an airborne Imaging DOAS instrument

    Directory of Open Access Journals (Sweden)

    K.-P. Heue

    2008-11-01

    In many investigations of tropospheric chemistry, information about the two-dimensional distribution of trace gases on a small scale (e.g. tens to hundreds of metres) is highly desirable. An airborne instrument based on imaging Differential Optical Absorption Spectroscopy has been built to map the two-dimensional distribution of a series of relevant trace gases including NO2, HCHO, C2H2O2, H2O, O4, SO2, and BrO on a scale of 100 m.

    Here we report on the first tests of the novel aircraft instrument over the industrialised South African Highveld, where large variations in NO2 column densities were measured in the immediate vicinity of several sources (e.g. power plants or steel works). The observed patterns in the trace gas distribution are interpreted with respect to flux estimates, and it is seen that the fine resolution of the measurements allows separate sources in close proximity to one another to be distinguished.

  6. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
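
Sampling slip values from a truncated exponential law is straightforward by inverse-CDF; the scale and cap below are hypothetical, and the sample mean is checked against the analytic mean of the truncated distribution.

```python
import random, math

random.seed(3)

def sample_trunc_exp(lam, s_max, n):
    """Inverse-CDF sampling from an exponential law truncated to [0, s_max]:
    f(s) proportional to exp(-s/lam) on that interval, zero beyond."""
    z = 1 - math.exp(-s_max / lam)   # probability mass kept by the truncation
    return [-lam * math.log(1 - random.random() * z) for _ in range(n)]

lam, s_max = 1.2, 4.0                # hypothetical scale and physical slip cap (m)
samples = sample_trunc_exp(lam, s_max, 200000)

# Analytic mean of the truncated exponential on [0, s_max]:
analytic_mean = lam - s_max * math.exp(-s_max / lam) / (1 - math.exp(-s_max / lam))
mc_mean = sum(samples) / len(samples)
```

Every sample respects the physical cap s_max, which is exactly the property that distinguishes the truncated law from a plain exponential.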

  8. Two-Dimensional Key Table-Based Group Key Distribution in Advanced Metering Infrastructure

    Directory of Open Access Journals (Sweden)

    Woong Go

    2014-01-01

    A smart grid provides two-way communication by using information and communication technology. In order to establish two-way communication, the advanced metering infrastructure (AMI) is used in the smart grid as the core infrastructure. This infrastructure consists of smart meters, data collection units, maintenance data management systems, and so on. However, the potential security problems of the AMI increase owing to its use of public networks, because the transmitted information is electricity consumption data used for billing. Thus, in order to establish a secure connection to transmit electricity consumption data, encryption is necessary, for which key distribution is required. Further, a group key is more efficient than a pairwise key in the hierarchical structure of the AMI. Therefore, we propose a group key distribution scheme using a two-dimensional key table, based on an analysis of sensor-network group key distribution schemes. The proposed scheme has three phases: group key predistribution, selection of the group key generation element, and generation of the group key.

  9. A development of two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz

    Science.gov (United States)

    Onuma, Takashi; Otani, Yukitoshi

    2014-03-01

    A two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz is proposed. A polarization image sensor is developed as the core device of the system. It is composed of a pixelated polarizer array made from photonic crystal and a parallel read-out circuit with a multi-channel analog-to-digital converter specialized for two-dimensional polarization detection. By applying a phase-shifting algorithm with circularly polarized incident light, the birefringence phase difference and azimuthal angle can be measured. The performance of the system is demonstrated experimentally by measuring an actual birefringence distribution and a polarization device such as a Babinet-Soleil compensator.
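
The abstract does not restate the exact phase-shifting relations for this sensor; a generic four-step phase-shifting retrieval, assuming frames of the form I_k = A + B·cos(δ + kπ/2), illustrates how a phase such as the birefringence phase difference is recovered per pixel.

```python
import math

def four_step_phase(i0, i1, i2, i3):
    """Recover the phase delta from four intensity frames
    I_k = A + B*cos(delta + k*pi/2), k = 0..3.
    Using I3 - I1 = 2B*sin(delta) and I0 - I2 = 2B*cos(delta)."""
    return math.atan2(i3 - i1, i0 - i2)

# Simulate one pixel with a known retardance-induced phase (arbitrary units):
A, B, delta_true = 0.5, 0.3, math.radians(40)
frames = [A + B * math.cos(delta_true + k * math.pi / 2) for k in range(4)]
delta_rec = four_step_phase(*frames)
```

The retrieval cancels both the background level A and the modulation amplitude B, which is why phase-shifting schemes are robust to intensity variations across the image.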

  11. A tool for simulating collision probabilities of animals with marine renewable energy devices.

    Science.gov (United States)

    Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise

    2017-01-01

    The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details and potential issues and limitations in the application of resulting probability distributions are highlighted.
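
A toy version of such a 4D (3D motion over time) collision estimate: a sphere on a circular orbit stands in for the moving device, and animals transit the volume on straight lines. All geometry, speeds, and the coarse time stepping are hypothetical, unrelated to the paper's CFD-based model.

```python
import random, math

random.seed(11)

def collision_probability(n_trials, device_radius, animal_radius=0.5,
                          device_orbit=5.0, omega=1.0, speed=2.0):
    """Monte Carlo estimate: fraction of straight-line animal transits whose
    minimum separation from the orbiting spherical device (radius
    device_radius, orbit radius device_orbit in the z = 0 plane) falls below
    the sum of the two radii at any sampled time."""
    hits = 0
    for _ in range(n_trials):
        # Random entry point on the x = -10 plane, heading along +x.
        y0, z0 = random.uniform(-8, 8), random.uniform(-8, 8)
        hit = False
        for step in range(400):          # coarse time stepping, dt = 0.05
            t = step * 0.05
            animal = (-10 + speed * t, y0, z0)
            device = (device_orbit * math.cos(omega * t),
                      device_orbit * math.sin(omega * t), 0.0)
            if math.dist(animal, device) < device_radius + animal_radius:
                hit = True
                break
        hits += hit
    return hits / n_trials

p_small = collision_probability(2000, device_radius=1.0)
p_large = collision_probability(2000, device_radius=3.0)
```

As expected, enlarging the device's collision envelope increases the estimated probability, the kind of sensitivity the published tool is designed to quantify with realistic shapes and motions.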

  12. Transmission probability method for solving neutron transport equation in three-dimensional triangular-z geometry

    Energy Technology Data Exchange (ETDEWEB)

    Liu Guoming [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)], E-mail: gmliusy@gmail.com; Wu Hongchun; Cao Liangzhi [Department of Nuclear Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)

    2008-09-15

    This paper presents a transmission probability method (TPM) to solve the neutron transport equation in three-dimensional triangular-z geometry. The source within the mesh is assumed to be spatially uniform and isotropic. At the mesh surface, the constant and the simplified P1 approximations are invoked for the anisotropic angular flux distribution. Based on this model, a code TPMTDT is encoded. It was verified by three 3D Takeda benchmark problems, of which the first two are in XYZ geometry and the last one is in hexagonal-z geometry, and by an unstructured-geometry problem. The results of the present method agree well with those of the Monte Carlo method and the Spherical Harmonics (PN) method.
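
Probabilities of the kind TPM builds on, such as the chance that a neutron born in a mesh leaves it without colliding, can be estimated by Monte Carlo. A sketch for a uniform, isotropic source in a purely absorbing 1D slab (distances in mean free paths), chosen for simplicity rather than the paper's triangular-z meshes, shows the expected decrease with optical thickness:

```python
import random, math

random.seed(5)

def escape_probability(tau, n=100000):
    """Monte Carlo estimate of the non-collision escape probability for
    particles born uniformly and isotropically in a slab of optical
    thickness tau (path lengths measured in mean free paths)."""
    escapes = 0
    for _ in range(n):
        x = random.random() * tau        # uniform birth position in the slab
        mu = random.uniform(-1, 1)       # isotropic direction cosine
        if mu == 0:
            continue
        # Optical path to the boundary reached along the flight direction:
        path = (tau - x) / mu if mu > 0 else -x / mu
        if random.random() < math.exp(-path):   # survives without collision
            escapes += 1
    return escapes / n

p_thin = escape_probability(0.5)
p_thick = escape_probability(2.0)
```

Thicker (optically) meshes trap more particles, so the transmission/escape probability falls monotonically with tau, which is the behaviour the tabulated TPM coupling coefficients encode.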

  13. Wave functions and two-electron probability distributions of the Hooke's-law atom and helium

    International Nuclear Information System (INIS)

    O'Neill, Darragh P.; Gill, Peter M. W.

    2003-01-01

    The Hooke's-law atom (hookium) provides an exactly soluble model for a two-electron atom in which the nuclear-electron Coulombic attraction has been replaced by a harmonic one. Starting from the known exact position-space wave function for the ground state of hookium, we present the momentum-space wave function. We also look at the intracules, two-electron probability distributions, for hookium in position, momentum, and phase space. These are compared with the Hartree-Fock results and the Coulomb holes (the difference between the exact and Hartree-Fock intracules) in position, momentum, and phase space are examined. We then compare these results with analogous results for the ground state of helium using a simple, explicitly correlated wave function

  14. Probability distribution of distance in a uniform ellipsoid: Theory and applications to physics

    International Nuclear Information System (INIS)

    Parry, Michelle; Fischbach, Ephraim

    2000-01-01

    A number of authors have previously found the probability P_n(r) that two points uniformly distributed in an n-dimensional sphere are separated by a distance r. This result greatly facilitates the calculation of self-energies of spherically symmetric matter distributions interacting by means of an arbitrary radially symmetric two-body potential. We present here the analogous results for P_2(r;ε) and P_3(r;ε), which respectively describe an ellipse and an ellipsoid whose major and minor axes are 2a and 2b. It is shown that for ε = (1 − b²/a²)^(1/2) ≤ 1, P_2(r;ε) and P_3(r;ε) can be obtained as an expansion in powers of ε, and our results are valid through order ε⁴. As an application of these results we calculate the Coulomb energy of an ellipsoidal nucleus, and compare our result to an earlier result quoted in the literature. (c) 2000 American Institute of Physics
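
In the spherical limit ε = 0, P_3(r) is classical and easy to check by Monte Carlo: for a unit ball, the mean distance between two independent uniform points is 36/35. A quick sketch using rejection sampling:

```python
import random, math

random.seed(2)

def uniform_in_ball():
    """Rejection-sample a point uniformly inside the unit 3D ball."""
    while True:
        p = tuple(random.uniform(-1, 1) for _ in range(3))
        if sum(c * c for c in p) <= 1:
            return p

n = 100000
mc_mean = sum(math.dist(uniform_in_ball(), uniform_in_ball())
              for _ in range(n)) / n
exact_mean = 36 / 35   # known mean of P_3(r) for the unit ball
```

The Monte Carlo estimate reproduces the analytic moment, which is the kind of consistency check the expansions in ε can be validated against in the spherical limit.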

  15. Row—column visibility graph approach to two-dimensional landscapes

    International Nuclear Information System (INIS)

    Xiao Qin; Pan Xue; Li Xin-Li; Stephen Mutua; Yang Hui-Jie; Jiang Yan; Wang Jian-Yong; Zhang Qing-Jun

    2014-01-01

    A new concept, called the row—column visibility graph, is proposed to map two-dimensional landscapes to complex networks. A cluster coverage is introduced to describe the extensive property of node clusters on a Euclidean lattice. Graphs mapped from fractals generated with the probability redistribution model behave scale-free. They have pattern-induced hierarchical organizations and comparatively much more extensive structures. The scale-free exponent has a negative correlation with the Hurst exponent, however, there is no deterministic relation between them. Graphs for fractals generated with the midpoint displacement model are exponential networks. When the Hurst exponent is large enough (e.g., H > 0.5), the degree distribution decays much more slowly, the average coverage becomes significant large, and the initially hierarchical structure at H < 0.5 is destroyed completely. Hence, the row—column visibility graph can be used to detect the pattern-related new characteristics of two-dimensional landscapes. (interdisciplinary physics and related areas of science and technology)

  16. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.

  17. R.f.-induced steps in mutually coupled, two-dimensional distributed Josephson tunnel junctions

    International Nuclear Information System (INIS)

    Klein, U.; Dammschneider, P.

    1991-01-01

    This paper reports on the amplitudes of the current steps in the I-V characteristics of mutually coupled two-dimensional distributed Josephson tunnel junctions driven by microwaves. For this purpose we use a numerical computation algorithm based on a planar resonator model for the individual Josephson tunnel junctions to calculate the d.c. current density distribution. In addition to the fundamental microwave frequency, harmonic contents of the tunneling current are also considered. The lateral dimensions of the individual junctions are small compared to the microwave wavelength and the Josephson penetration depth, giving an almost constant current density distribution. Therefore, the coupled junctions can give much greater step amplitudes than a single junction with an equal tunneling area, because of their nonuniform current density distribution

  18. A Bernstein-Von Mises Theorem for discrete probability distributions

    OpenAIRE

    Boucheron, S.; Gassiat, E.

    2008-01-01

    We investigate the asymptotic normality of the posterior distribution in the discrete setting, when model dimension increases with sample size. We consider a probability mass function θ0 on ℕ∖{0} and a sequence of truncation levels (k_n)_n satisfying k_n^3 ≤ n·inf_{i≤k_n} θ0(i). Let θ̂ denote the maximum likelihood estimate of (θ0(i))_{i≤k_n} and let Δ_n(θ0) denote the k_n-dimensional vector whose i-th coordinate is defined by √n(θ̂_n(i) − θ0(i)) for 1 ≤ i ≤ k_n. We check that under mild ...

  19. Three-dimensional mapping of light transmittance and foliage distribution using lidar

    International Nuclear Information System (INIS)

    Todd, K.W.; Csillag, F.; Atkinson, P.M.

    2003-01-01

    The horizontal and vertical distributions of light transmittance were evaluated as a function of foliage distribution using lidar (light detection and ranging) observations for a sugar maple (Acer saccharum) stand in the Turkey Lakes Watershed. Along the vertical profile of vegetation, horizontal slices of probability of light transmittance were derived from an Optech ALTM 1225 instrument's return pulses (two discrete, 15-cm diameter returns) using indicator kriging. These predictions were compared with (i) below canopy (1-cm spatial resolution) transect measurements of the fraction of photosynthetically active radiation (FPAR) and (ii) measurements of tree height. A first-order trend was initially removed from the lidar returns. The vertical distribution of vegetation height was then sliced into nine percentiles and indicator variograms were fitted to them. Variogram parameters were found to vary as a function of foliage height above ground. In this paper, we show that the relationship between ground measurements of FPAR and kriged estimates of vegetation cover becomes stronger and tighter at coarser spatial resolutions. Three-dimensional maps of foliage distribution were computed as stacks of the percentile probability surfaces. These probability surfaces showed correspondence with individual tree-based observations and provided a much more detailed characterization of quasi-continuous foliage distribution. These results suggest that discrete-return lidar provides a promising technology to capture variations of foliage characteristics in forests to support the development of functional linkages between biophysical and ecological studies. (author)

  20. Three-dimensional quantum key distribution in the presence of several eavesdroppers

    International Nuclear Information System (INIS)

    Daoud, M; Ez-zahraouy, H

    2011-01-01

    Quantum key distribution based on encoding in three-dimensional systems in the presence of several eavesdroppers is proposed. This extends the BB84 protocol in the presence of many eavesdroppers where two-level quantum systems (qubits) are replaced by three-level systems (qutrits). We discuss the scenarios involving two, three and four complementary bases. We derive the explicit form of Alice and Bob mutual information and the information gained by each eavesdropper. In particular, we show that, in the presence of only one eavesdropper, the protocol involving four bases is safer than the other ones. However, for two eavesdroppers, the security is strongly dependent on the attack probabilities. The effect of a large number of eavesdroppers is also investigated.

  1. Three-dimensional quantum key distribution in the presence of several eavesdroppers

    Energy Technology Data Exchange (ETDEWEB)

    Daoud, M [Max Planck Institute for the Physics of Complex Systems, Dresden (Germany); Ez-zahraouy, H, E-mail: daoud@pks.mpg.de, E-mail: ezahamid@fsr.ac.m [LMPHE (URAC), Faculty of Sciences, University Mohammed V-Agdal, Rabat (Morocco)

    2011-10-15

    Quantum key distribution based on encoding in three-dimensional systems in the presence of several eavesdroppers is proposed. This extends the BB84 protocol in the presence of many eavesdroppers where two-level quantum systems (qubits) are replaced by three-level systems (qutrits). We discuss the scenarios involving two, three and four complementary bases. We derive the explicit form of Alice and Bob mutual information and the information gained by each eavesdropper. In particular, we show that, in the presence of only one eavesdropper, the protocol involving four bases is safer than the other ones. However, for two eavesdroppers, the security is strongly dependent on the attack probabilities. The effect of a large number of eavesdroppers is also investigated.

  2. Two-dimensional topological field theories coupled to four-dimensional BF theory

    International Nuclear Information System (INIS)

    Montesinos, Merced; Perez, Alejandro

    2008-01-01

    Four-dimensional BF theory admits a natural coupling to extended sources supported on two-dimensional surfaces or string world sheets. Solutions of the theory are in one to one correspondence with solutions of Einstein equations with distributional matter (cosmic strings). We study new (topological field) theories that can be constructed by adding extra degrees of freedom to the two-dimensional world sheet. We show how two-dimensional Yang-Mills degrees of freedom can be added on the world sheet, producing in this way, an interactive (topological) theory of Yang-Mills fields with BF fields in four dimensions. We also show how a world sheet tetrad can be naturally added. As in the previous case the set of solutions of these theories are contained in the set of solutions of Einstein's equations if one allows distributional matter supported on two-dimensional surfaces. These theories are argued to be exactly quantizable. In the context of quantum gravity, one important motivation to study these models is to explore the possibility of constructing a background-independent quantum field theory where local degrees of freedom at low energies arise from global topological (world sheet) degrees of freedom at the fundamental level

  3. Separating the contributions of variability and parameter uncertainty in probability distributions

    International Nuclear Information System (INIS)

    Sankararaman, S.; Mahadevan, S.

    2013-01-01

    This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data leads to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family of distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels; first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters
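
    The decomposition can be illustrated with the law of total variance, which is what variance-based sensitivity indices reduce to in the simplest case. The Gaussian family and the numerical values of `sigma_v` (natural variability) and `sigma_p` (parameter uncertainty) below are hypothetical, not taken from the paper.

```python
import random
import statistics

random.seed(1)

# Family of distributions: X | mu ~ Normal(mu, sigma_v), where the
# distribution parameter mu is itself uncertain: mu ~ Normal(0, sigma_p).
sigma_v = 1.0    # natural variability (hypothetical value)
sigma_p = 0.5    # distribution-parameter uncertainty (hypothetical value)

N = 20000
mus = [random.gauss(0.0, sigma_p) for _ in range(N)]   # outer loop: parameters
xs = [random.gauss(mu, sigma_v) for mu in mus]         # inner loop: variability

# Law of total variance: Var(X) = E[Var(X|mu)] + Var(E[X|mu]).
total_var = statistics.pvariance(xs)
var_from_variability = sigma_v ** 2              # E[Var(X|mu)], constant here
var_from_parameters = statistics.pvariance(mus)  # Var(E[X|mu]) = Var(mu)

# Fractional contribution of natural variability to the total uncertainty.
frac_variability = sigma_v ** 2 / (sigma_v ** 2 + sigma_p ** 2)
```

    Here variability accounts for 1.0/1.25 = 80% of the total variance; with sparse data, `sigma_p` would itself have to be estimated from the posterior of the distribution parameters, which is where the paper's two-level formulation comes in.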

  4. Analyses of moments in pseudorapidity intervals at √s = 546 GeV by means of two probability distributions in pure-birth process

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Suzuki, N.

    1988-01-01

    Moments in pseudorapidity intervals at the CERN Sp̄pS collider (√s = 546 GeV) are analyzed by means of two probability distributions in the pure-birth stochastic process. Our results show that a probability distribution obtained from the Poisson distribution as an initial condition is more useful than that obtained from the Kronecker δ function. Analyses of moments by Koba-Nielsen-Olesen scaling functions derived from solutions of the pure-birth stochastic process are also made. Moreover, analyses of preliminary data at √s = 200 and 900 GeV are added.

  5. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an inexact probability distribution, defined by fuzzy assertions from ambiguous experts. The problem formulation is presented, and two solution strategies are given: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the inexact probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  6. Chain end distribution of block copolymer in two-dimensional microphase-separated structure studied by scanning near-field optical microscopy.

    Science.gov (United States)

    Sekine, Ryojun; Aoki, Hiroyuki; Ito, Shinzaburo

    2009-10-01

    The chain end distribution of a block copolymer in a two-dimensional microphase-separated structure was studied by scanning near-field optical microscopy (SNOM). In the monolayer of poly(octadecyl methacrylate)-block-poly(isobutyl methacrylate) (PODMA-b-PiBMA), the free end of the PiBMA subchain was directly observed by SNOM, and the spatial distributions of the whole block and the chain end are examined and compared with the convolution of the point spread function of the microscope and distribution function of the model structures. It was found that the chain end distribution of the block copolymer confined in two dimensions has a peak near the domain center, being concentrated in the narrower region, as compared with three-dimensional systems.

  7. Two-dimensional distribution of carbon nanotubes in copper flake powders

    Energy Technology Data Exchange (ETDEWEB)

    Tan Zhanqiu; Li Zhiqiang; Fan Genlian; Li Wenhuan; Liu Qinglei; Zhang Wang; Zhang Di, E-mail: lizhq@sjtu.edu.cn, E-mail: zhangdi@sjtu.edu.cn [State Key Laboratory of Metal Matrix Composites, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2011-06-03

    We report an approach of flake powder metallurgy to the uniform, two-dimensional (2D) distribution of carbon nanotubes (CNTs) in Cu flake powders. It consists of the preparation of Cu flakes by ball milling in an imidazoline derivative (IMD) aqueous solution, surface modification of Cu flakes with polyvinyl alcohol (PVA) hydrosol and adsorption of CNTs from a CNT aqueous suspension. During ball milling, a hydrophobic monolayer of IMD is adsorbed on the surface of the Cu flakes, on top of which a hydrophilic PVA film is adsorbed subsequently. This PVA film could further interact with the carboxyl-group functionalized CNTs and act to lock the CNTs onto the surfaces of the Cu flakes. The CNT volume fraction is controlled easily by adjusting the concentration/volume of CNT aqueous suspension and Cu flake thickness. The as-prepared CNT/Cu composite flakes will serve as suitable building blocks for the self-assembly of CNT/Cu laminated composites that enable the full potential of 2D distributed CNTs to achieve high thermal conductivity.

  8. Two-dimensional distribution of carbon nanotubes in copper flake powders

    International Nuclear Information System (INIS)

    Tan Zhanqiu; Li Zhiqiang; Fan Genlian; Li Wenhuan; Liu Qinglei; Zhang Wang; Zhang Di

    2011-01-01

    We report an approach of flake powder metallurgy to the uniform, two-dimensional (2D) distribution of carbon nanotubes (CNTs) in Cu flake powders. It consists of the preparation of Cu flakes by ball milling in an imidazoline derivative (IMD) aqueous solution, surface modification of Cu flakes with polyvinyl alcohol (PVA) hydrosol and adsorption of CNTs from a CNT aqueous suspension. During ball milling, a hydrophobic monolayer of IMD is adsorbed on the surface of the Cu flakes, on top of which a hydrophilic PVA film is adsorbed subsequently. This PVA film could further interact with the carboxyl-group functionalized CNTs and act to lock the CNTs onto the surfaces of the Cu flakes. The CNT volume fraction is controlled easily by adjusting the concentration/volume of CNT aqueous suspension and Cu flake thickness. The as-prepared CNT/Cu composite flakes will serve as suitable building blocks for the self-assembly of CNT/Cu laminated composites that enable the full potential of 2D distributed CNTs to achieve high thermal conductivity.

  9. Two-dimensional distribution of carbon nanotubes in copper flake powders.

    Science.gov (United States)

    Tan, Zhanqiu; Li, Zhiqiang; Fan, Genlian; Li, Wenhuan; Liu, Qinglei; Zhang, Wang; Zhang, Di

    2011-06-03

    We report an approach of flake powder metallurgy to the uniform, two-dimensional (2D) distribution of carbon nanotubes (CNTs) in Cu flake powders. It consists of the preparation of Cu flakes by ball milling in an imidazoline derivative (IMD) aqueous solution, surface modification of Cu flakes with polyvinyl alcohol (PVA) hydrosol and adsorption of CNTs from a CNT aqueous suspension. During ball milling, a hydrophobic monolayer of IMD is adsorbed on the surface of the Cu flakes, on top of which a hydrophilic PVA film is adsorbed subsequently. This PVA film could further interact with the carboxyl-group functionalized CNTs and act to lock the CNTs onto the surfaces of the Cu flakes. The CNT volume fraction is controlled easily by adjusting the concentration/volume of CNT aqueous suspension and Cu flake thickness. The as-prepared CNT/Cu composite flakes will serve as suitable building blocks for the self-assembly of CNT/Cu laminated composites that enable the full potential of 2D distributed CNTs to achieve high thermal conductivity.

  10. Test of quantum thermalization in the two-dimensional transverse-field Ising model.

    Science.gov (United States)

    Blaß, Benjamin; Rieger, Heiko

    2016-12-01

    We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems.

  11. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    International Nuclear Information System (INIS)

    Xu, Xin-Ping; Ide, Yusuke

    2016-01-01

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.
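
    The time-averaged (limiting) distribution that the paper obtains in closed form can be estimated numerically for a concrete instance. The sketch below uses a Hadamard coin and a moving shift on an 8-site cycle; the coin choice, initial state, and averaging horizon are illustrative assumptions, not the general coin/swapping-shift setting of the paper.

```python
import numpy as np

N = 8      # number of sites on the cycle
T = 400    # steps over which the position distribution is time-averaged

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# psi[c, x] is the amplitude for coin state c at site x; the walker
# starts at site 0 with a balanced complex coin state.
psi = np.zeros((2, N), dtype=complex)
psi[:, 0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]

avg = np.zeros(N)
for _ in range(T):
    psi = H @ psi                    # coin toss on every site
    psi[0] = np.roll(psi[0], 1)      # coin state 0 moves clockwise
    psi[1] = np.roll(psi[1], -1)     # coin state 1 moves counter-clockwise
    avg += (np.abs(psi) ** 2).sum(axis=0)   # accumulate position probabilities
avg /= T                             # time-averaged distribution
```

    Changing the coin and initial-state parameters reshapes `avg`, which is exactly the freedom the paper exploits to produce symmetric limiting distributions.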

  12. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn [School of Physical Science and Technology, Soochow University, Suzhou 215006 (China); Ide, Yusuke [Department of Information Systems Creation, Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa, 221-8686 (Japan)

    2016-10-15

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.

  13. Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval

    Science.gov (United States)

    Smith, Jeffrey Scott

    2006-01-01

    Many applications of two-dimensional Fourier transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optic correction, where optical adjustments are made in real time to correct for external (atmospheric turbulence) and internal (stability) aberrations, which cause image degradation. Phase retrieval, a type of image-based wavefront sensing, uses numerous two-dimensional Fast Fourier Transforms (FFTs). To meet the required real-time specifications, a distributed system is needed, and thus the 2-D FFT necessitates an all-to-all communication among the computational nodes. The 1-D floating-point FFT is very efficient on a digital signal processor (DSP). This study presents several architectures, and analyses of them, that address the all-to-all communication with DSPs. Emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
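
    The all-to-all step arises because a 2-D FFT decomposes into two passes of 1-D FFTs with a transpose ("corner turn") in between. A single-machine sketch, with row blocks standing in for the DSP nodes and NumPy's FFT standing in for the 1-D DSP kernel; the node count here is an arbitrary assumption.

```python
import numpy as np

def fft2_distributed(a, nodes=4):
    # Pass 1: each "node" owns a block of rows and runs 1-D FFTs on them.
    blocks = np.array_split(a, nodes, axis=0)
    stage1 = np.vstack([np.fft.fft(b, axis=1) for b in blocks])
    # Corner turn: the transpose is the all-to-all communication step,
    # since every node must exchange data with every other node.
    stage1_t = stage1.T
    # Pass 2: 1-D FFTs on the rows of the transposed array (i.e. the
    # columns of the original), then transpose back.
    blocks = np.array_split(stage1_t, nodes, axis=0)
    stage2 = np.vstack([np.fft.fft(b, axis=1) for b in blocks])
    return stage2.T

rng = np.random.default_rng(42)
x = rng.standard_normal((16, 16))
err = np.max(np.abs(fft2_distributed(x) - np.fft.fft2(x)))
```

    The result agrees with the direct 2-D FFT to machine precision; on real hardware the design problem is scheduling the corner turn so the fixed timing budget is met.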

  14. Two-dimensional sub-half-wavelength atom localization via controlled spontaneous emission.

    Science.gov (United States)

    Wan, Ren-Gang; Zhang, Tong-Yi

    2011-12-05

    We propose a scheme for two-dimensional (2D) atom localization based on controlled spontaneous emission, in which the atom interacts with two orthogonal standing-wave fields. Due to the spatially dependent atom-field interaction, the position probability distribution of the atom can be directly determined by measuring the resulting spontaneous emission spectrum. The phase-sensitive property of the atomic system leads to quenching of the spontaneous emission in some regions of the standing waves, which significantly reduces the uncertainty in the position measurement of the atom. We find that the frequency measurement of the emitted light localizes the atom in a half-wavelength domain. In particular, the probability of finding the atom at a particular position can reach 100% when a photon with a certain frequency is detected. By increasing the Rabi frequencies of the driving fields, such 2D sub-half-wavelength atom localization can acquire high spatial resolution.

  15. Spin dynamics in a two-dimensional quantum gas

    DEFF Research Database (Denmark)

    Pedersen, Poul Lindholm; Gajdacz, Miroslav; Deuretzbacher, Frank

    2014-01-01

    We have investigated spin dynamics in a two-dimensional quantum gas. Through spin-changing collisions, two clouds with opposite spin orientations are spontaneously created in a Bose-Einstein condensate. After ballistic expansion, both clouds acquire ring-shaped density distributions with superimp…

  16. Approximate solutions for the two-dimensional integral transport equation. Solution of complex two-dimensional transport problems

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1980-11-01

    This work is divided into two parts: the first part deals with the solution of complex two-dimensional transport problems; the second one (note CEA-N-2166) treats the critically mixed methods of resolution. A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the interface current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding, and water, or homogenized structural material. The cells are divided into zones that are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is effected by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: CALLIOPE uses a cylindrical cell model and one or three terms for the flux expansion, and NAUSICAA uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes, one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems and by calculations performed in the APOLLO multigroup code. [fr]

  17. Quasi-integrability and two-dimensional QCD

    International Nuclear Information System (INIS)

    Abdalla, E.; Mohayaee, R.

    1996-10-01

    The notion of integrability in two-dimensional QCD is discussed. We show that in spite of an infinite number of conserved charges, particle production is not entirely suppressed. This phenomenon, which we call quasi-integrability, is explained in terms of quantum corrections to the combined algebra of higher-conserved and spectrum-generating currents. We predict the qualitative form of particle production probabilities and verify that they are in agreement with numerical data. We also discuss four-dimensional self-dual Yang-Mills theory in the light of our results. (author). 25 refs, 4 figs, 1 tab

  18. Protein-protein interaction site predictions with three-dimensional probability distributions of interacting atoms on protein surfaces.

    Directory of Open Access Journals (Sweden)

    Ching-Tai Chen

    Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, specificity were 0.753, 0.519, 0.677, and 0.779 respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted
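
    The residue-based figures quoted above all follow from a single 2×2 confusion matrix. The counts below are hypothetical, chosen only so that the standard formulas reproduce rates close to the reported ones; they are not the paper's actual counts.

```python
import math

def binary_metrics(tp, fp, tn, fn):
    # Standard residue-level metrics for a binary (interface /
    # non-interface) classifier.
    acc = (tp + tn) / (tp + fp + tn + fn)
    prec = tp / (tp + fp)
    sens = tp / (tp + fn)    # recall / true-positive rate
    spec = tn / (tn + fp)    # true-negative rate
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, prec, sens, spec, mcc

# Hypothetical counts, consistent with the reported rates.
acc, prec, sens, spec, mcc = binary_metrics(tp=677, fp=626, tn=2212, fn=323)
```

    The Matthews correlation coefficient is the preferred summary here because interface residues are a minority class, so accuracy alone overstates performance.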

  19. Protein-Protein Interaction Site Predictions with Three-Dimensional Probability Distributions of Interacting Atoms on Protein Surfaces

    Science.gov (United States)

    Chen, Ching-Tai; Peng, Hung-Pin; Jian, Jhih-Wei; Tsai, Keng-Chang; Chang, Jeng-Yih; Yang, Ei-Wen; Chen, Jun-Bo; Ho, Shinn-Ying; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, specificity were 0.753, 0.519, 0.677, and 0.779 respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted correctly with

  20. An analytical calculation of neighbourhood order probabilities for high dimensional Poissonian processes and mean field models

    International Nuclear Information System (INIS)

    Tercariol, Cesar Augusto Sangaletti; Kiipper, Felipe de Moura; Martinez, Alexandre Souto

    2007-01-01

    Consider that the coordinates of N points are randomly generated along the edges of a d-dimensional hypercube (random point problem). The probability P^{(d,N)}_{m,n} that an arbitrary point is the mth nearest neighbour to its own nth nearest neighbour (Cox probabilities) plays an important role in spatial statistics. Also, it has been useful in the description of physical processes in disordered media. Here we propose a simpler derivation of Cox probabilities, where we stress the role played by the system dimensionality d. In the limit d → ∞, the distances between pairs of points become independent (random link model) and closed analytical forms for the neighbourhood probabilities are obtained both for the thermodynamic limit and for finite-size systems. Breaking the distance symmetry constraint drives us to the random map model, for which the Cox probabilities are obtained for two cases: whether a point is its own nearest neighbour or not
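
    The Cox probability (that a point is the m-th nearest neighbour of its own n-th nearest neighbour) is easy to estimate by direct simulation, which makes a useful check on the analytical forms. A sketch for the illustrative case m = n = 1 in d = 2, i.e. the probability that the nearest-neighbour relation is mutual; the point count and trial count are arbitrary choices.

```python
import random

def cox_probability(m, n, d=2, npts=200, trials=400, seed=3):
    # Monte Carlo estimate of the probability that a point is the m-th
    # nearest neighbour of its own n-th nearest neighbour, for npts
    # points placed uniformly in the d-dimensional unit hypercube.
    rng = random.Random(seed)

    def sqdist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    hits = 0
    for _ in range(trials):
        pts = [tuple(rng.random() for _ in range(d)) for _ in range(npts)]
        a = pts[0]
        b = sorted(pts[1:], key=lambda p: sqdist(a, p))[n - 1]  # n-th nb of a
        ranked = sorted((p for p in pts if p != b), key=lambda p: sqdist(b, p))
        if ranked[m - 1] == a:          # is a the m-th neighbour of b?
            hits += 1
    return hits / trials

p11 = cox_probability(1, 1)
```

    In two dimensions the mutual-nearest-neighbour probability for a Poisson process is known to be about 0.62, and the estimate lands close to that (edge effects in the unit square shift it slightly).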

  1. The mean distance to the nth neighbour in a uniform distribution of random points: an application of probability theory

    International Nuclear Information System (INIS)

    Bhattacharyya, Pratip; Chakrabarti, Bikas K

    2008-01-01

    We study different ways of determining the mean distance ⟨r_n⟩ between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating ⟨r_n⟩. Next, we describe two alternative means of deriving the exact expression of ⟨r_n⟩: we review the method using absolute probability and develop an alternative method using conditional probability. Finally, we obtain an approximation to ⟨r_n⟩ from the mean volume between the reference point and its nth neighbour and compare it with the heuristic and exact results.
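
    The heuristic, exact, and approximate routes to the mean distance can all be checked against a direct simulation. A 1-D sketch under stated assumptions: N points uniform on [-1, 1] around a reference point at the origin, so the distance of each point from the origin is uniform on [0, 1] and the exact mean distance to the n-th neighbour is the order-statistic value n/(N+1), which the Poisson-process estimate n/(2λ) with density λ = N/2 approximates.

```python
import random

def mean_nth_neighbour_dist(n, npts=100, trials=4000, seed=7):
    # Monte Carlo mean distance from the origin to its n-th nearest
    # neighbour, with npts points placed uniformly on [-1, 1].
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        dists = sorted(abs(rng.uniform(-1.0, 1.0)) for _ in range(npts))
        total += dists[n - 1]           # distance to the n-th neighbour
    return total / trials

r1 = mean_nth_neighbour_dist(1)   # exact mean: 1/101
r3 = mean_nth_neighbour_dist(3)   # exact mean: 3/101
```

    The simulated means match n/(N+1) to within sampling noise, which is the kind of consistency check the paper's heuristic-versus-exact comparison performs analytically.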

  2. Investigation of Real-Time Two-Dimensional Visualization of Fuel Spray Liquid/Vapor Distribution via Exciplex Fluorescence.

    Science.gov (United States)

    1987-08-30

    Final report: Investigation of Real-Time Two-Dimensional Visualization of Fuel Spray Liquid/Vapor Distribution via Exciplex Fluorescence. James F. Verdieck and Arthur A. Rotunno, United Technologies Research Center, and Lynn A. Melton, University …

  3. Four-dimensional symmetry from a broad viewpoint. II. Invariant distribution of quantized field oscillators and questions on infinities

    Science.gov (United States)

    Hsu, J. P.

    1983-01-01

    The foundation of the quantum field theory is changed by introducing a new universal probability principle into field operators: one single inherent and invariant probability distribution P(|k|) is postulated for boson and fermion field oscillators. This can be accomplished only when one treats the four-dimensional symmetry from a broad viewpoint. Special relativity is too restrictive to allow such a universal probability principle. A radical length, R, appears in physics through the probability distribution P(|k|). The force between two point particles vanishes when their relative distance tends to zero. This appears to be a general property for all forces and resembles the property of asymptotic freedom. The usual infinities in vacuum fluctuations and in local interactions, however complicated they may be, are all removed from quantum field theories. In Appendix A a simple finite and unitary theory of unified electroweak interactions is discussed without assuming Higgs scalar bosons.

  4. Test of quantum thermalization in the two-dimensional transverse-field Ising model

    Science.gov (United States)

    Blaß, Benjamin; Rieger, Heiko

    2016-01-01

    We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems. PMID:27905523

  5. First operation of a powerful FEL with two-dimensional distributed feedback

    CERN Document Server

    Agarin, N V; Bobylev, V B; Ginzburg, N S; Ivanenko, V G; Kalinin, P V; Kuznetsov, S A; Peskov, N Yu; Sergeev, A S; Sinitsky, S L; Stepanov, V D

    2000-01-01

    A W-band (75 GHz) FEL of planar geometry driven by a sheet electron beam was realised using the pulse accelerator ELMI (0.8 MeV/3 kA/5 μs). To provide spatial coherence of the radiation from different parts of the electron beam, whose cross-section is 0.4x12 cm, two-dimensional distributed feedback was employed using a 2-D Bragg resonator of planar geometry. The resonator consisted of two 2-D Bragg reflectors separated by a regular waveguide section. The total energy in the microwave pulse of microsecond duration was 100 J, corresponding to a power of approx. 100 MW. The main component of the FEL radiation spectrum was at 75 GHz, which corresponded to the zone of effective Bragg reflection found from 'cold' microwave testing of the resonator. The experimental data compared well with the results of theoretical analysis.

  6. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  7. Correlation between discrete probability and reaction front propagation rate in heterogeneous mixtures

    Science.gov (United States)

    Naine, Tarun Bharath; Gundawar, Manoj Kumar

    2017-09-01

    We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. In the absence of an analytical solution, and with the differences in ignition times of adjacent reaction cells following non-Markovian statistics, the solution for the thermal wave propagation rate in a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulations. However, such simulations, which are based on Monte-Carlo methods, require several iterations of calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems differing in the value of the shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained by a discrete probability between two limits show excellent agreement with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations; the thermal wave propagation rates can now be calculated based only on the macroscopic entity of discrete probability.

  8. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
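For the Onemax case, the fitness distribution after uniform bit-flip mutation can be computed directly by convolving two binomials: the new fitness is k - X + Y with X ~ Bin(k, p) ones flipped off and Y ~ Bin(n - k, p) zeros flipped on. This is an elementary reconstruction, not the paper's Krawtchouk-polynomial derivation.

```python
import numpy as np
from math import comb

def onemax_mutation_pmf(n, k, p):
    """PMF of the Onemax fitness after uniform bit-flip mutation of a
    length-n string with k ones (direct two-binomial convolution)."""
    pmf = np.zeros(n + 1)
    for x in range(k + 1):            # ones flipped off, X ~ Bin(k, p)
        px = comb(k, x) * p ** x * (1 - p) ** (k - x)
        for y in range(n - k + 1):    # zeros flipped on, Y ~ Bin(n-k, p)
            py = comb(n - k, y) * p ** y * (1 - p) ** (n - k - y)
            pmf[k - x + y] += px * py
    return pmf
```

Each entry of the PMF is a polynomial in p, consistent with the paper's claim; the mean is k(1-p) + (n-k)p.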

  9. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  10. Some applications of the fractional Poisson probability distribution

    International Nuclear Information System (INIS)

    Laskin, Nick

    2009-01-01

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.

  11. Alternate two-dimensional quantum walk with a single-qubit coin

    International Nuclear Information System (INIS)

    Di Franco, C.; Busch, Th.; Mc Gettrick, M.; Machida, T.

    2011-01-01

    We have recently proposed a two-dimensional quantum walk where the requirement of a higher dimensionality of the coin space is substituted with the alternance of the directions in which the walker can move [C. Di Franco, M. Mc Gettrick, and Th. Busch, Phys. Rev. Lett. 106, 080502 (2011)]. For a particular initial state of the coin, this walk is able to perfectly reproduce the spatial probability distribution of the nonlocalized case of the Grover walk. Here, we present a more detailed proof of this equivalence. We also extend the analysis to other initial states in order to provide a more complete picture of our walk. We show that this scheme outperforms the Grover walk in the generation of x-y spatial entanglement for any initial condition, with the maximum entanglement obtained in the case of the particular aforementioned state. Finally, the equivalence is generalized to wider classes of quantum walks and a limit theorem for the alternate walk in this context is presented.
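A direct simulation of the alternate walk is short: each step tosses a single-qubit coin and shifts along x, then tosses again and shifts along y. The sketch below assumes a Hadamard coin and the initial coin state (|0> + i|1>)/sqrt(2); both are illustrative choices, not necessarily the paper's exact setup.

```python
import numpy as np

def alternate_walk(T):
    """Alternate 2D quantum walk with a single-qubit Hadamard coin.
    Returns the spatial probability distribution after T full steps."""
    N = 2 * T + 1
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    psi = np.zeros((N, N, 2), dtype=complex)
    psi[T, T] = np.array([1.0, 1.0j]) / np.sqrt(2.0)   # walker at origin
    for _ in range(T):
        psi = psi @ H.T                                   # coin toss
        psi[:, :, 0] = np.roll(psi[:, :, 0], 1, axis=0)   # shift along x
        psi[:, :, 1] = np.roll(psi[:, :, 1], -1, axis=0)
        psi = psi @ H.T                                   # coin toss
        psi[:, :, 0] = np.roll(psi[:, :, 0], 1, axis=1)   # shift along y
        psi[:, :, 1] = np.roll(psi[:, :, 1], -1, axis=1)
    return np.abs(psi[..., 0]) ** 2 + np.abs(psi[..., 1]) ** 2
```

Since both the coin operation and the conditional shifts are unitary, the returned distribution sums to one at every step.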

  12. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
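The mode overestimate is easy to reproduce: for skewed data, a Gaussian fit places the mode at the sample mean. The sketch below uses a lognormal as a stand-in for the asymmetric distribution (the record does not specify which asymmetric distribution the authors propose), so the true mode is exp(mu - sigma^2) while the Gaussian-fit mode sits at exp(mu + sigma^2/2).

```python
import numpy as np

def mode_estimates(mu=0.0, sigma=0.5, n=100000, seed=4):
    """Compare mode estimates for skewed, roughness-like data:
    a Gaussian fit puts the mode at the sample mean, while a
    lognormal fit recovers the (smaller) true mode."""
    rng = np.random.default_rng(seed)
    data = rng.lognormal(mu, sigma, n)
    gaussian_mode = data.mean()                  # symmetric-fit mode
    m, s = np.log(data).mean(), np.log(data).std()
    skewed_mode = np.exp(m - s ** 2)             # lognormal-fit mode
    return float(gaussian_mode), float(skewed_mode)
```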

  13. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature, due to lack of knowledge, and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively; triangular representations are also dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables contributes to the understanding of similarities and differences of the two approaches as well as practical applications.

  14. The Visualization of the Space Probability Distribution for a Particle Moving in a Double Ring-Shaped Coulomb Potential

    Directory of Open Access Journals (Sweden)

    Yuan You

    2018-01-01

    The analytical solutions to a double ring-shaped Coulomb potential (RSCP) are presented. The visualizations of the space probability distribution (SPD) are illustrated for the two-dimensional (contour) and three-dimensional (isosurface) cases. The quantum numbers (n,l,m) are mainly relevant for those quasi-quantum numbers (n′,l′,m′) via the double RSCP parameter c. The SPDs are of circular ring shape in spherical coordinates. The properties of the relative probability values (RPVs) P are also discussed. For example, when we consider the special case (n,l,m)=(6,5,0), the SPD moves towards the two poles of the z-axis when P increases. Finally, we discuss the different cases for the potential parameter b, which takes negative and positive values for c>0. Compared with the particular case b=0, the SPDs are shrunk for b=-0.5, while they are spread out for b=0.5.

  15. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

    Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project the present proposal for modified damage stability probability distribution functions has been developed and submitted to the "Sub-committee on st..."

  16. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP) and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s_a can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s_a. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs
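A TCP model of this kind can be sketched as a Poisson cell-kill expression averaged over a Gaussian spread in radiosensitivity. In the sketch below, every parameter value is illustrative (not from the trial), and only the linear cell-kill term exp(-a*D) is kept.

```python
import numpy as np

def tcp(dose, n_clonogens=1e7, alpha_mean=0.3, alpha_sd=0.07,
        samples=20000, seed=1):
    """Poisson TCP averaged over a Gaussian spread s_a in the
    radiosensitivity a (all parameter values illustrative)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(alpha_mean, alpha_sd, samples)
    a = a[a > 0]                                   # keep physical values
    surviving = n_clonogens * np.exp(-np.outer(a, np.atleast_1d(dose)))
    return np.exp(-surviving).mean(axis=0)         # population-averaged TCP
```

The population averaging is what flattens the otherwise unrealistically steep single-patient dose-response curve.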

  17. Converting dose distributions into tumour control probability

    Energy Technology Data Exchange (ETDEWEB)

    Nahum, A E [The Royal Marsden Hospital, London (United Kingdom). Joint Dept. of Physics

    1996-08-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP) and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s_a can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s_a. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs.

  18. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combination of the input distribution is defined by the user by introducing the appropriate FORTRAN statements to the appropriate subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined
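The sampling-and-combine scheme can be sketched in a few lines, with a plain Python callable standing in for the user-supplied FORTRAN subroutine; the two lognormal input distributions and the product combination below are invented examples, not STADIC's.

```python
import numpy as np

def combine(dists, func, samples=100000, seed=0):
    """STADIC-style Monte Carlo combination: draw from each input
    distribution, apply the user-supplied combination function, and
    report mean, standard deviation, and confidence limits."""
    rng = np.random.default_rng(seed)
    draws = [d(rng, samples) for d in dists]
    out = func(*draws)
    lo, hi = np.percentile(out, [5, 95])        # 90% confidence limits
    return out.mean(), out.std(), (lo, hi)

# e.g. the product of two lognormally distributed inputs
mean, sd, (lo, hi) = combine(
    [lambda r, n: r.lognormal(-6, 0.5, n),
     lambda r, n: r.lognormal(-1, 0.3, n)],
    lambda a, b: a * b)
```

For this particular example the product is again lognormal, so the sampled mean can be checked against exp(-7 + 0.34/2).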

  19. Diachronic changes in word probability distributions in daily press

    Directory of Open Access Journals (Sweden)

    Stanković Jelena

    2006-01-01

    Changes in probability distributions of individual words and word types were investigated within two samples of daily press spanning fifty years. One sample, derived from the Corpus of Serbian Language (CSL) /Kostić, Đ., 2001/, covers the period between 1945 and 1957; the other, derived from the Ebart Media Documentation (EBR), was compiled from seven daily newspapers and five weekly magazines from 2002 and 2003. Each sample consisted of about 1 million words. The obtained results indicate that nouns and adjectives were more frequent in the CSL, while verbs and prepositions are more frequent in the EBR sample, suggesting a decrease of sentence length in the last five decades. Conspicuous changes in probability distribution of individual words were observed for nouns and adjectives, while minimal or no changes were observed for verbs and prepositions. Such an outcome suggests that nouns and adjectives are most susceptible to diachronic changes, while verbs and prepositions appear to be resistant to such changes.

  20. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique.

  1. Two-dimensional simulation of sintering process

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Pinto, Lucio Carlos Martins; Vasconcelos, Wander L.

    1996-01-01

    The results of two-dimensional simulations are directly applied to systems in which one of the dimensions is much smaller than the others, and to sections of three dimensional models. Moreover, these simulations are the first step of the analysis of more complex three-dimensional systems. In this work, two basic features of the sintering process are studied: the types of particle size distributions related to the powder production processes and the evolution of geometric parameters of the resultant microstructures during the solid-state sintering. Random packing of equal spheres is considered in the sintering simulation. The packing algorithm does not take into account the interactive forces between the particles. The used sintering algorithm causes the densification of the particle set. (author)

  2. Quantum Fourier transform, Heisenberg groups and quasi-probability distributions

    International Nuclear Information System (INIS)

    Patra, Manas K; Braunstein, Samuel L

    2011-01-01

    This paper aims to explore the inherent connection between Heisenberg groups, quantum Fourier transform (QFT) and (quasi-probability) distribution functions. Distribution functions for continuous and finite quantum systems are examined from three perspectives and all of them lead to Weyl-Gabor-Heisenberg groups. The QFT appears as the intertwining operator of two equivalent representations arising out of an automorphism of the group. Distribution functions correspond to certain distinguished sets in the group algebra. The marginal properties of a particular class of distribution functions (Wigner distributions) arise from a class of automorphisms of the group algebra of the Heisenberg group. We then study the reconstruction of the Wigner function from the marginal distributions via inverse Radon transform giving explicit formulae. We consider some applications of our approach to quantum information processing and quantum process tomography.

  3. Three-dimensional coupled double-distribution-function lattice ...

    Indian Academy of Sciences (India)

    Ruo-Fan Qiu

    2017-11-14

    Two three-dimensional (3D) lattice Boltzmann models in the framework of the coupled double-distribution-function approach for compressible flows, in which the specific-heat ratio and Prandtl number can be adjusted, are developed in this paper. The main differences between the two models are ...

  4. Snow-melt flood frequency analysis by means of copula based 2D probability distributions for the Narew River in Poland

    Directory of Open Access Journals (Sweden)

    Bogdan Ozga-Zielinski

    2016-06-01

    New hydrological insights for the region: The results indicated that the 2D normal probability distribution model gives a better probabilistic description of snowmelt floods characterized by the 2-dimensional random variable (Qmax,f, Vf) compared to the elliptical Gaussian copula and Archimedean 1-parameter Gumbel–Hougaard copula models, in particular from the viewpoint of probability of exceedance as well as complexity and time of computation. Nevertheless, the copula approach offers a new perspective in estimating the 2D probability distribution for multidimensional random variables. Results showed that the 2D model for snowmelt floods built using the Gumbel–Hougaard copula is much better than the model built using the Gaussian copula.
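For reference, the 1-parameter Gumbel–Hougaard copula used for the flood peak/volume pair has the closed form C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)) with theta >= 1; theta = 1 recovers independence. A minimal sketch:

```python
import math

def gumbel_hougaard(u, v, theta):
    """Gumbel-Hougaard copula C(u, v) for theta >= 1
    (theta = 1 gives the independence copula C(u, v) = u*v)."""
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1.0 / theta))
```

The copula satisfies the boundary conditions C(u, 1) = u and C(1, v) = v, and increasing theta strengthens the positive dependence between the two margins.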

  5. Exact critical properties of two-dimensional polymer networks from conformal invariance

    International Nuclear Information System (INIS)

    Duplantier, B.

    1988-03-01

    An infinity of exact critical exponents for two-dimensional self-avoiding walks can be derived from conformal invariance and Coulomb gas techniques applied to the O(n) model and to the Potts model. They apply to polymer networks of any topology, for which a general scaling theory is given, valid in any dimension d. The infinite set of exponents has also been calculated to O(ε 2 ), for d=4-ε. The 2D study also includes other universality classes like the dense polymers, the Hamiltonian walks, the polymers at their θ-point. Exact correlation functions can be further given for Hamiltonian walks, and exact winding angle probability distributions for the self-avoiding walks

  6. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The test for increasing hazard rates is based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...

  7. Two dimensional electron transport in disordered and ordered distributions of magnetic flux vortices

    International Nuclear Information System (INIS)

    Nielsen, M.; Hedegaard, P.

    1994-04-01

    We have considered the conductivity properties of a two dimensional electron gas (2DEG) in two different kinds of inhomogeneous magnetic fields, i.e. a disordered distribution of magnetic flux vortices, and a periodic array of magnetic flux vortices. The work falls in two parts. In the first part we show how the phase shifts for an electron scattering on an isolated vortex, can be calculated analytically, and related to the transport properties through the differential cross section. In the second part we present numerical results for the Hall conductivity of the 2DEG in a periodic array of flux vortices found by exact diagonalization. We find characteristic spikes in the Hall conductance, when it is plotted against the filling fraction. It is argued that the spikes can be interpreted in terms of ''topological charge'' piling up across local and global gaps in the energy spectrum. (au) (23 refs.)

  8. Idealized models of the joint probability distribution of wind speeds

    Science.gov (United States)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
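The construction from Gaussian, isotropic, mean-zero velocity components can be sketched directly: correlated Gaussian component pairs at two sites give correlated speeds with Rayleigh marginals (the zero-mean limit of the bivariate Rice model, i.e. Weibull marginals of shape 2; unit component variances are assumed here).

```python
import numpy as np

def bivariate_speeds(r, n=200000, seed=2):
    """Wind speeds at two sites from Gaussian, isotropic, mean-zero
    components whose inter-site correlation is r. Marginals are
    Rayleigh (Weibull, shape 2); the speeds inherit a weaker
    positive dependence than the components."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, r], [r, 1.0]]
    u = rng.multivariate_normal([0, 0], cov, n)   # zonal components
    v = rng.multivariate_normal([0, 0], cov, n)   # meridional components
    return np.hypot(u[:, 0], v[:, 0]), np.hypot(u[:, 1], v[:, 1])
```

A power-law transform of each speed (s raised to 2/k) then yields Weibull marginals of shape k, which is the route to the bivariate Weibull model described above.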

  9. A measure of mutual divergence among a number of probability distributions

    Directory of Open Access Journals (Sweden)

    J. N. Kapur

    1987-01-01

    major inequalities due to Shannon, Renyi and Holder. The inequalities are then used to obtain some useful results in information theory. In particular measures are obtained to measure the mutual divergence among two or more probability distributions.

  10. Probability representations of a class of two-way diffusions

    Energy Technology Data Exchange (ETDEWEB)

    Clifford, P.; Green, N.J.P. [Department of Statistics, University of Oxford, Oxford (United Kingdom); Feng, J.F. [COGS, Sussex University, Brighton (United Kingdom); Wei, G. [Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Kowloon, Hong Kong (China)

    2002-07-19

    There has been little progress in the analysis of two-way diffusion in the last few decades due to the difficulties brought by the interface section similar to a free boundary condition. In this paper, however, the equivalent probability model is considered and the interface section is precisely described by an integral equation. The solution of two-way diffusion is then expressed in an integral form with the integrand being the solution of a classical first passage time model and the solution of a one-dimensional integral equation which is relatively easier to solve. The exact expression of the two-way diffusion enables us to find the explicit solution of the model with infinite horizontal boundaries and without drifting. (author)

  11. Two-dimensional atom localization based on coherent field controlling in a five-level M-type atomic system.

    Science.gov (United States)

    Jiang, Xiangqian; Li, Jinjiang; Sun, Xiudong

    2017-12-11

    We study two-dimensional sub-wavelength atom localization based on microwave coupling-field control and the spontaneously generated coherence (SGC) effect. For a five-level M-type atom, introducing a microwave coupling field between the two upper levels and considering the quantum interference between the two transitions from the upper levels to the lower levels, the analytical expression of the conditional position probability (CPP) distribution is obtained using the iterative method. The influence of the detuning of the spontaneously emitted photon, the Rabi frequency of the microwave field, and the SGC effect on the CPP are discussed. Two-dimensional sub-half-wavelength atom localization with high precision and high spatial resolution is achieved by adjusting the detuning and the Rabi frequency, where the atom can be localized in a region smaller than λ/10×λ/10. The spatial resolution is improved significantly compared with the case without the microwave field.

  12. Dynamics of vortex interactions in two-dimensional flows

    DEFF Research Database (Denmark)

    Juul Rasmussen, J.; Nielsen, A.H.; Naulin, V.

    2002-01-01

    The dynamics and interaction of like-signed vortex structures in two-dimensional flows are investigated by means of direct numerical solutions of the two-dimensional Navier-Stokes equations. Two vortices with distributed vorticity merge when their distance relative to their radius, d/R_0, is below a critical value, a_c. Using the Weiss field, a_c is estimated for vortex patches. Introducing an effective radius for vortices with distributed vorticity, we find that 3.3 ... is effectively producing small scale structures and the relation to the enstrophy "cascade" in developed 2D turbulence is discussed. The influence of finite viscosity on the merging is also investigated. Additionally, we examine vortex interactions on a finite domain, and discuss the results in connection...

  13. Library of subroutines to produce one- and two-dimensional statistical distributions on the ES-1010 computer

    International Nuclear Information System (INIS)

    Vzorov, I.K.; Ivanov, V.V.

    1978-01-01

    A library of subroutines to produce 1- and 2-dimensional distribution on the ES-1010 computer is described. 1-dimensional distribution is represented as the histogram, 2-dimensional one is represented as the table. The library provides such opportunities as booking and deleting, filling and clearing histograms (tables), arithmetic operations with them, and printing histograms (tables) on the computer printer with variable printer line. All subroutines are written in FORTRAN-4 language and can be called from the program written in FORTRAN or in ASSEMBLER. This library can be implemented on all computer systems that offer a FORTRAN-4 compiler

  14. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize the chaotic map like Logistic map to generate the pseudo-random numbers mapped as the design variables for global optimization. Many existing researches indicated that COA can more easily escape from the local minima than classical stochastic optimization algorithms. This paper reveals the inherent mechanism of high efficiency and superior performance of COA, from a new perspective of both the probability distribution property and search speed of chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDF and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that, the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COA. To achieve the high efficiency of COA, it is recommended to adopt the appropriate chaotic map generating the desired chaotic sequences with uniform or nearly uniform probability distribution and large Lyapunov exponent.
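The two chaotic-sequence descriptors named above, the invariant probability density and the Lyapunov exponent, are straightforward to estimate numerically. For the fully chaotic logistic map x -> 4x(1-x) the Lyapunov exponent is known to be ln 2, which the sketch below recovers (iteration count and seed are arbitrary choices).

```python
import math

def logistic_lyapunov(n=100000, x0=0.123):
    """Estimate the Lyapunov exponent of the fully chaotic logistic
    map x -> 4x(1-x) as the orbit average of log|f'(x)|,
    with f'(x) = 4 - 8x; the exact value is ln 2."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return acc / n
```

A positive Lyapunov exponent of this size is what gives the chaotic sequence its fast search speed; the corresponding invariant density 1/(pi*sqrt(x(1-x))) is strongly non-uniform, which is why the paper recommends maps with flatter densities.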

  15. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  16. Theoretical derivation of wind power probability distribution function and applications

    International Nuclear Information System (INIS)

    Altunkaynak, Abdüsselam; Erdik, Tarkan; Dabanlı, İsmail; Şen, Zekai

    2012-01-01

    Highlights: ► Derivation of the wind power stochastic characteristics: the standard deviation and the dimensionless skewness. ► Perturbation expressions for the wind power statistics from the Weibull probability distribution function (PDF). ► Comparisons with the corresponding characteristics of the wind speed PDF, which abides by the Weibull PDF. ► The wind power abides by the Weibull PDF. -- Abstract: The instantaneous wind power contained in an air current is directly proportional to the cube of the wind speed. In practice, wind speeds are recorded in the form of a time series, so it is necessary to develop a formulation that takes into consideration the statistical parameters of such a series. The purpose of this paper is to derive a general wind power formulation in terms of the statistical parameters by using perturbation theory, which leads to general expressions for the wind power expectation and other statistical parameters such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied to any wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived from wind speed data. It is possible to determine wind power at any desired risk level; in practical studies 5% or 10% risk levels are most often preferred, and the necessary simple procedure for this purpose is presented in this paper.
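
    The Weibull special case can be checked independently of the perturbation machinery: for Weibull wind speeds with shape k and scale c, the raw moments are E[v^m] = c^m Γ(1 + m/k), so the mean power density 0.5 ρ E[v³] follows in closed form. A sketch with illustrative parameters (not the paper's data):

```python
import math
import numpy as np

def weibull_moment(m, k, c):
    """Raw moment E[v^m] for a Weibull distribution with shape k, scale c."""
    return c**m * math.gamma(1.0 + m / k)

def mean_wind_power(k, c, rho=1.225):
    """Mean wind power density 0.5 * rho * E[v^3] in W/m^2."""
    return 0.5 * rho * weibull_moment(3, k, c)

k, c = 2.0, 8.0                                  # illustrative Weibull parameters
rng = np.random.default_rng(0)
v = c * rng.weibull(k, size=1_000_000)           # Monte Carlo wind speeds
mc = 0.5 * 1.225 * np.mean(v**3)
print(mean_wind_power(k, c), mc)                 # analytic vs Monte Carlo
```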

  17. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated as well. The standard EAQLV limits for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions with seven plotting-position formulas (empirical cumulative distribution functions) are compared to fit the daily average TSP and PM10 concentrations of the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution representing TSP and PM10 has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations; the Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and the days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM10 concentration is found to exceed the threshold limit on 174 days.
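
    The exceedance computation reads directly off the fitted CDF. A sketch with made-up Frechet parameters (alpha and s below are illustrative, NOT the paper's fitted values):

```python
import math

def frechet_cdf(x, alpha, s, m=0.0):
    """Frechet CDF F(x) = exp(-((x - m)/s)^(-alpha)) for x > m, else 0."""
    if x <= m:
        return 0.0
    return math.exp(-((x - m) / s) ** (-alpha))

def exceedance(limit, alpha, s, m=0.0, days=365):
    """Exceedance probability of `limit` and expected exceedance days per year."""
    p = 1.0 - frechet_cdf(limit, alpha, s, m)
    return p, p * days

# Toy fit for the TSP limit of 230 ug/m3:
p, n = exceedance(230.0, alpha=3.0, s=120.0)
print(p, n)
```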

  18. Topology of two-dimensional turbulent flows of dust and gas

    Science.gov (United States)

    Mitra, Dhrubaditya; Perlekar, Prasad

    2018-04-01

    We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes numbers, St. The DNS confirm that the statistics of topological properties of B are the same in the Eulerian and Lagrangian frames only if the Eulerian data are weighted by the dust density. We use this correspondence to study the statistics of topological properties of A in the Lagrangian frame from our Eulerian simulations by calculating density-weighted probability distribution functions. We further find that in the Lagrangian frame, the mean value of the trace of A is negative and its magnitude increases with St approximately as exp(-C/St) with a constant C ≈ 0.1. The statistical distribution of the different topological structures that appear in the dust flow differs between the Eulerian and Lagrangian (density-weighted Eulerian) cases, particularly for St close to unity. In both cases, for small St the topological structures have close to zero divergence and are either vortical (elliptic) or strain dominated (hyperbolic, saddle). As St increases, the contribution to negative divergence comes mostly from saddles and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the Lagrangian (density-weighted Eulerian) case has fewer outward spirals and more converging saddles. Inward spirals are the least probable topological structures in both cases.

  19. A Tale of Two Probabilities

    Science.gov (United States)

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  20. Three-dimensional analytic probabilities of coupled vibrational-rotational-translational energy transfer for DSMC modeling of nonequilibrium flows

    International Nuclear Information System (INIS)

    Adamovich, Igor V.

    2014-01-01

    A three-dimensional, nonperturbative, semiclassical analytic model of vibrational energy transfer in collisions between a rotating diatomic molecule and an atom, and between two rotating diatomic molecules (Forced Harmonic Oscillator–Free Rotation model) has been extended to incorporate rotational relaxation and coupling between vibrational, translational, and rotational energy transfer. The model is based on analysis of semiclassical trajectories of rotating molecules interacting through a repulsive exponential atom-to-atom potential. The model predictions are compared with the results of three-dimensional close-coupled semiclassical trajectory calculations using the same potential energy surface. The comparison demonstrates good agreement between analytic and numerical probabilities of rotational and vibrational energy transfer processes over a wide range of total collision energies, rotational energies, and impact parameters. The model predicts probabilities of single-quantum and multi-quantum vibrational-rotational transitions and is applicable up to very high collision energies and quantum numbers. Closed-form analytic expressions for these transition probabilities lend themselves to straightforward incorporation into DSMC nonequilibrium flow codes.

  1. Two-dimensional electron density characterisation of arc interruption phenomenon in current-zero phase

    Science.gov (United States)

    Inada, Yuki; Kamiya, Tomoki; Matsuoka, Shigeyasu; Kumada, Akiko; Ikeda, Hisatoshi; Hidaka, Kunihiko

    2018-01-01

    Two-dimensional electron density imaging over free burning SF6 arcs and SF6 gas-blast arcs was conducted at current zero using highly sensitive Shack-Hartmann type laser wavefront sensors in order to experimentally characterise electron density distributions for the success and failure of arc interruption in the thermal reignition phase. The experimental results under an interruption probability of 50% showed that free burning SF6 arcs with axially asymmetric electron density profiles were interrupted with a success rate of 88%. On the other hand, the current interruption of SF6 gas-blast arcs was reproducibly achieved under locally reduced electron densities and the interruption success rate was 100%.

  2. Two Dimensional Finite Element Model to Study Calcium Distribution in Oocytes

    Science.gov (United States)

    Naik, Parvaiz Ahmad; Pardasani, Kamal Raj

    2015-06-01

    Cytosolic free calcium concentration is a key regulatory factor and perhaps the most widely used means of controlling cellular function. Calcium can enter cells through different pathways that are activated by specific stimuli, including membrane depolarization, chemical signals and calcium depletion of intracellular stores. One of the important components of oocyte maturation is differentiation of the Ca2+ signaling machinery, which is essential for egg activation after fertilization. Eggs acquire the ability to produce the fertilization-specific calcium signal during oocyte maturation. The calcium concentration patterns required during different stages of oocyte maturation are still not completely known, and the mechanisms involved in calcium dynamics in the oocyte cell are still not well understood. In view of the above, a two-dimensional FEM model has been proposed to study calcium distribution in an oocyte cell. Parameters such as buffers, the ryanodine receptor, the SERCA pump and the voltage-gated calcium channel are incorporated in the model. The initial and boundary conditions have been framed on the basis of the biophysical conditions. The model is transformed into variational form and the Ritz finite element method has been employed to obtain the solution. A program has been developed in MATLAB 7.10 for the entire problem and executed to obtain numerical results. The numerical results have been used to study the effect of buffers, RyR, the SERCA pump and VGCC on calcium distribution in an oocyte cell.

  3. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telematics systems are subscribed to and used by the trucking industry as an operations management tool. More than one telematics operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
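
    The convolution step can be sketched numerically. The example below uses toy gamma models for two links and, for simplicity, omits the paper's link-to-link conditional correction by assuming the links are independent:

```python
import numpy as np
from math import gamma as gamma_fn

dt = 0.1                                       # minutes per grid cell
t = np.arange(0, 60, dt)

def gamma_pdf(t, shape, scale):
    """Toy gamma model for a single link's travel-time PDF."""
    return t**(shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale**shape)

link1 = gamma_pdf(t, shape=9.0, scale=1.0)     # mean 9 min
link2 = gamma_pdf(t, shape=16.0, scale=1.0)    # mean 16 min

# Route PDF = convolution of the (assumed independent) link PDFs.
route = np.convolve(link1, link2)[: len(t)] * dt
mean_route = np.sum(t * route) * dt
print(mean_route)                              # ≈ 9 + 16 = 25 minutes
```

    The paper's correlated version would replace the plain convolution with a sum over the conditional distribution of the downstream link given the upstream one.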

  4. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
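
    What a CUMBIN-style routine computes can be sketched directly; the snippet below is a minimal reimplementation of the cumulative binomial sum (not the NASA C program), applied to k-out-of-n reliability:

```python
from math import comb

def cum_binomial_at_least(k, n, p):
    """P(at least k successes in n Bernoulli(p) trials), i.e. the
    reliability of a k-out-of-n system of identical components."""
    return sum(comb(n, j) * p**j * (1.0 - p) ** (n - j) for j in range(k, n + 1))

# Reliability of a 2-out-of-3 system with component reliability 0.9:
print(cum_binomial_at_least(2, 3, 0.9))   # 0.972
```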

  5. Optimizing separations in online comprehensive two-dimensional liquid chromatography.

    Science.gov (United States)

    Pirok, Bob W J; Gargano, Andrea F G; Schoenmakers, Peter J

    2018-01-01

    Online comprehensive two-dimensional liquid chromatography has become an attractive option for the analysis of complex nonvolatile samples found in various fields (e.g. environmental studies, food, life, and polymer sciences). Two-dimensional liquid chromatography complements the highly popular hyphenated systems that combine liquid chromatography with mass spectrometry. Two-dimensional liquid chromatography is also applied to the analysis of samples that are not compatible with mass spectrometry (e.g. high-molecular-weight polymers), providing important information on the distribution of the sample components along chemical dimensions (molecular weight, charge, lipophilicity, stereochemistry, etc.). Also, in comparison with conventional one-dimensional liquid chromatography, two-dimensional liquid chromatography provides a greater separation power (peak capacity). Because of the additional selectivity and higher peak capacity, the combination of two-dimensional liquid chromatography with mass spectrometry allows for simpler mixtures of compounds to be introduced in the ion source at any given time, improving quantitative analysis by reducing matrix effects. In this review, we summarize the rationale and principles of two-dimensional liquid chromatography experiments, describe advantages and disadvantages of combining different selectivities and discuss strategies to improve the quality of two-dimensional liquid chromatography separations. © 2017 The Authors. Journal of Separation Science published by WILEY-VCH Verlag GmbH & Co. KGaA.

  6. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    Science.gov (United States)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism of runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC on increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems to be small for any probability. For a set of parameters, the derived probability distribution of peak discharge seems to be well fitted by the gamma distribution. Finally, an application to a small watershed, aimed at testing whether rational runoff-coefficient tables for the rational method can be prepared in advance, and a comparison of peak discharges obtained by the GABS model with those measured in an experimental flume for a loamy-sand soil, were carried out.

  7. Estimating probable flaw distributions in PWR steam generator tubes

    International Nuclear Information System (INIS)

    Gorman, J.A.; Turner, A.P.L.

    1997-01-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses

  8. PHOTOMETRIC REDSHIFT PROBABILITY DISTRIBUTIONS FOR GALAXIES IN THE SDSS DR8

    International Nuclear Information System (INIS)

    Sheldon, Erin S.; Cunha, Carlos E.; Mandelbaum, Rachel; Brinkmann, J.; Weaver, Benjamin A.

    2012-01-01

    We present redshift probability distributions for galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 8 imaging data. We used the nearest-neighbor weighting algorithm to derive the ensemble redshift distribution N(z), and individual redshift probability distributions P(z) for galaxies with r < 21.8 and u < 29.0. As part of this technique, we calculated weights for a set of training galaxies with known redshifts such that their density distribution in five-dimensional color-magnitude space was proportional to that of the photometry-only sample, producing a nearly fair sample in that space. We estimated the ensemble N(z) of the photometric sample by constructing a weighted histogram of the training-set redshifts. We derived P(z)'s for individual objects by using training-set objects from the local color-magnitude space around each photometric object. Using the P(z) for each galaxy can reduce the statistical error in measurements that depend on the redshifts of individual galaxies. The spectroscopic training sample is substantially larger than that used for the DR7 release. The newly added PRIMUS catalog is now the most important training set used in this analysis by a wide margin. We expect the primary sources of error in the N(z) reconstruction to be sample variance and spectroscopic failures: The training sets are drawn from relatively small volumes of space, and some samples have large incompleteness. Using simulations we estimated the uncertainty in N(z) due to sample variance at a given redshift to be ∼10%-15%. The uncertainty on calculations incorporating N(z) or P(z) depends on how they are used; we discuss the case of weak lensing measurements. The P(z) catalog is publicly available from the SDSS Web site.
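
    The nearest-neighbor weighting idea can be sketched in a single "color" dimension (the paper uses five color-magnitude dimensions); all data below are synthetic, chosen only to make the reweighting visible:

```python
import numpy as np

rng = np.random.default_rng(1)
train_c = rng.normal(0.0, 1.0, 2000)                       # training "colours"
train_z = 0.5 + 0.2 * train_c + rng.normal(0, 0.02, 2000)  # toy colour-redshift relation
photo_c = rng.normal(0.5, 0.8, 10000)                      # photometric sample

# Weight each training galaxy by the photometric counts inside the distance
# to its k-th nearest training neighbour: the weighted training density then
# tracks the photometric density in colour space.
k = 20
weights = np.empty(len(train_c))
for i, c in enumerate(train_c):
    d_k = np.sort(np.abs(train_c - c))[k]                  # k-th NN distance
    weights[i] = np.sum(np.abs(photo_c - c) <= d_k)
weights /= weights.sum()

Nz, edges = np.histogram(train_z, bins=30, weights=weights, density=True)
print(np.sum(weights * train_z))   # weighted mean z, pulled toward the photo sample
```

    The weighted histogram Nz plays the role of the ensemble N(z); restricting the same weighting to the training neighbors of a single photometric object gives its individual P(z).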

  9. Probability theory and mathematical statistics for engineers

    CERN Document Server

    Pugachev, V S

    1984-01-01

    Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector

  10. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the exponent ... partition function Z of the network as the sum over all degree distributions, with given energy.

  11. The area distribution of two-dimensional random walks and non-Hermitian Hofstadter quantum mechanics

    International Nuclear Information System (INIS)

    Matveenko, Sergey; Ouvry, Stéphane

    2014-01-01

    When random walks on a square lattice are biased horizontally to move solely to the right, the probability distribution of their algebraic area can be obtained exactly (Mashkevich and Ouvry 2009 J. Stat. Phys. 137 71). We explicitly map this biased classical random system onto a non-Hermitian Hofstadter-like quantum model where a charged particle on a square lattice coupled to a perpendicular magnetic field hops only to the right. For the commensurate case, when the magnetic flux per unit cell is rational, an exact solution of the quantum model is obtained. The periodicity of the lattice allows one to relate traces of the Nth power of the Hamiltonian to probability distribution generating functions of biased walks of length N. (paper)
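
    The biased walks can be sampled directly. The sketch below assumes, purely for illustration, equal probabilities 1/3 for right/up/down steps (the cited papers work with the generating function rather than specific probabilities), and accumulates the algebraic (signed) area swept on each rightward step:

```python
import numpy as np

rng = np.random.default_rng(0)

def algebraic_area(n_steps):
    """Walk with steps right/up/down (assumed equiprobable here);
    each rightward step sweeps a signed column of height y."""
    y, area = 0, 0
    for _ in range(n_steps):
        s = rng.integers(3)
        if s == 0:
            area += y        # step right
        elif s == 1:
            y += 1           # step up
        else:
            y -= 1           # step down
    return area

areas = np.array([algebraic_area(50) for _ in range(20_000)])
print(areas.mean(), areas.std())   # mean ≈ 0 by up/down symmetry
```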

  12. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

    The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying a power law rather than the standard Gibbs-type distributions. The Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution, or the q-exponential family, by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give the q-exponential family a new mathematical structure different from those previously proposed. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (maximum a posteriori probability) estimator.
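
    The generalized exponential underlying the family is exp_q(x) = [1 + (1-q)x]_+^{1/(1-q)}, which recovers the ordinary exp(x) as q → 1 and gives power-law tails otherwise. A minimal sketch:

```python
import numpy as np

def q_exp(x, q):
    """exp_q(x) = [1 + (1-q) x]_+^(1/(1-q)); ordinary exp in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(np.asarray(x, dtype=float))
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    # np.abs keeps the unselected branch warning-free; the cutoff [.]_+ sets 0 outside.
    return np.where(base > 0.0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

x = np.array([-0.5, 0.0, 0.5])
print(q_exp(x, 1.0))   # ordinary exp
print(q_exp(x, 2.0))   # 1/(1 - x): the q = 2 power-law case
```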

  13. Comparative analysis through probability distributions of a data set

    Science.gov (United States)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in fields as diverse as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available that can help us select the best-fitting model. Some of the graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and to compare that distance to some threshold values. Calculating the goodness-of-fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
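
    The "distance" idea can be sketched with the Kolmogorov-Smirnov statistic: compute the maximal gap between the empirical CDF and each candidate model CDF and keep the model with the smaller gap. Synthetic data and a hand-rolled KS statistic (rather than a library call) are used below:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
data = rng.normal(10.0, 2.0, 500)        # sample actually drawn from N(10, 2)

def ks_stat(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and a model CDF."""
    x = np.sort(data)
    n = len(x)
    F = cdf(x)
    up = np.arange(1, n + 1) / n - F     # gaps just after each data point
    dn = F - np.arange(0, n) / n         # gaps just before each data point
    return max(up.max(), dn.max())

def normal_cdf(x):                       # candidate 1: the true model
    return 0.5 * (1.0 + np.vectorize(erf)((x - 10.0) / (2.0 * sqrt(2.0))))

def expon_cdf(x):                        # candidate 2: exponential with the same mean
    return 1.0 - np.exp(-x / 10.0)

d_norm, d_exp = ks_stat(data, normal_cdf), ks_stat(data, expon_cdf)
print(d_norm, d_exp)                     # the smaller distance identifies the better fit
```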

  14. A Fokker-Planck-Landau collision equation solver on two-dimensional velocity grid and its application to particle-in-cell simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, E. S.; Chang, C. S., E-mail: cschang@pppl.gov [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States); Korea Advanced Institute of Science and Technology, Yuseong-gu, DaeJeon 305-701 (Korea, Republic of)

    2014-03-15

    An approximate two-dimensional solver of the nonlinear Fokker-Planck-Landau collision operator has been developed using the assumption that the particle probability distribution function is independent of gyroangle in the limit of a strong magnetic field. The isotropic one-dimensional schemes developed for the nonlinear Fokker-Planck-Landau equation by Buet and Cordier [J. Comput. Phys. 179, 43 (2002)] and for the linear Fokker-Planck-Landau equation by Chang and Cooper [J. Comput. Phys. 6, 1 (1970)] have been modified and extended to the two-dimensional nonlinear equation. In addition, a method is suggested for applying the new velocity-grid-based collision solver to Lagrangian particle-in-cell simulation by adjusting the weights of marker particles; it is applied in a five-dimensional particle-in-cell code to calculate the neoclassical ion thermal conductivity in a tokamak plasma. Error verifications show practical aspects of the present scheme for both grid-based and particle-based kinetic codes.

  15. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology Enables practitioners to study distributed large scale dimensional metrology independently Includes specific examples of the development of new system prototypes

  16. THREE-MOMENT BASED APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING SYSTEMS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2014-03-01

    The paper deals with the problem of approximating probability distributions of random variables defined on the positive real axis with a coefficient of variation different from unity. When queueing systems are used as models for computer networks, the calculation of characteristics is usually performed at the level of expectation and variance. At the same time, one of the main characteristics of multimedia data transmission quality in computer networks is delay jitter. For jitter calculation the distribution function of packet time delay should be known. It is shown that changing the third moment of the packet delay distribution changes the calculated jitter by tens or hundreds of percent, with the same values of the first two moments, i.e. the expectation and the delay variation coefficient. This means that the delay distribution approximation used for jitter calculation should match the third moment of the delay distribution. For random variables with coefficients of variation greater than unity, an iterative approximation algorithm using a two-phase hyper-exponential distribution matched to three moments of the approximated distribution is offered. It is shown that for random variables with coefficients of variation less than unity the impact of the third moment becomes negligible, and for the approximation of such distributions an Erlang distribution matched to the first two moments should be used. This approach makes it possible to obtain upper bounds for the relevant characteristics, in particular the upper bound of delay jitter.
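
    The sensitivity to the third moment is easy to demonstrate. The sketch below (not the paper's approximation algorithm) matches a gamma and a lognormal delay model on the same mean and coefficient of variation; their third moments differ (skewness 3.0 vs ≈ 7.9), and so do their tail delays, which is what jitter measures pick up:

```python
import numpy as np

mean, cv = 10.0, 1.5                         # same first two moments for both models
rng = np.random.default_rng(7)

shape = 1.0 / cv**2                          # gamma matched to (mean, cv); skewness 2*cv = 3.0
scale = mean / shape
gamma_delays = rng.gamma(shape, scale, 400_000)

sigma2 = np.log(1.0 + cv**2)                 # lognormal matched to the same (mean, cv);
mu = np.log(mean) - sigma2 / 2.0             # its skewness (cv^2 + 3)*cv ≈ 7.9 differs
logn_delays = rng.lognormal(mu, np.sqrt(sigma2), 400_000)

for name, d in [("gamma", gamma_delays), ("lognormal", logn_delays)]:
    print(name, d.mean(), d.std() / d.mean(), np.percentile(d, 99.9))
```

    Both samples agree on mean and variation coefficient, yet their 99.9th-percentile delays differ by tens of percent.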

  17. The probability of false positives in zero-dimensional analyses of one-dimensional kinematic, force and EMG trajectories.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2016-06-14

    A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
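
    The simulation logic can be sketched as follows, with a simple moving-average smoother standing in for the paper's smooth Gaussian fields of specified FWHM (an assumption made here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_per_group, n_experiments, window = 101, 10, 500, 15
t_crit = 2.101                         # two-tailed t critical value, df = 18, alpha = 0.05

def smooth_trajectories(n_traj):
    """Null (no effect) 1-D trajectories, smoothed by a moving average."""
    raw = rng.normal(size=(n_traj, n_nodes + window - 1))
    kernel = np.ones(window) / window
    return np.array([np.convolve(r, kernel, mode="valid") for r in raw])

false_pos = 0
for _ in range(n_experiments):
    a = smooth_trajectories(n_per_group)
    b = smooth_trajectories(n_per_group)
    se = np.sqrt(a.var(axis=0, ddof=1) / n_per_group + b.var(axis=0, ddof=1) / n_per_group)
    t = (a.mean(axis=0) - b.mean(axis=0)) / se
    false_pos += bool(np.any(np.abs(t) > t_crit))   # ANY node significant -> false positive

print(false_pos / n_experiments)       # far above the nominal alpha = 0.05
```

    Because the pointwise test is applied at every node of a correlated trajectory, the experiment-wise false positive rate is governed by the number of effectively independent segments, not by alpha.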

  18. Precipitation intensity probability distribution modelling for hydrological and construction design purposes

    International Nuclear Information System (INIS)

    Koshinchanov, Georgy; Dimitrov, Dobri

    2008-01-01

    The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood-warning procedures, etc. Those estimates are usually statistical estimates of the intensity of precipitation realized over a certain period of time (e.g. 5, 10 min, etc.) with different return periods (e.g. 20, 100 years, etc.). The traditional approach to evaluating the mentioned precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. Those estimates then become part of the state regulations used for various economic activities. Two problems occur with this approach: 1. Due to various factors the climate conditions change, and the precipitation intensity estimates need regular updating; 2. As the extremes of the probability distribution are of particular importance in practice, the distribution-fitting methodology needs specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: the method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration; the same method, but with separate modeling of the probability distribution for the middle and high probability quantiles; a method similar to the first one, but with an intensity threshold of 0.36 mm/min; another method, proposed by the Russian hydrologist G. A. Aleksiev for the regionalization of estimates over a territory, improved and adapted for Bulgaria by S. Gerasimov; the next method considers only

  19. ONE-DIMENSIONAL AND TWO-DIMENSIONAL LEADERSHIP STYLES

    Directory of Open Access Journals (Sweden)

    Nikola Stefanović

    2007-06-01

    Full Text Available In order to motivate their group members to perform certain tasks, leaders use different leadership styles. These styles are based on leaders' backgrounds, knowledge, values, experiences, and expectations. The one-dimensional styles used by many world leaders are the autocratic and democratic styles, which lie on opposite ends of the leadership spectrum. In order to precisely define the leadership styles on the spectrum between the autocratic and the democratic style, leadership theory researchers use two-dimensional matrices, which define leadership styles on the basis of different parameters. By using these parameters, one can identify two-dimensional styles.

  20. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s, as an example. For given models of the coded data, upper and lower bounds on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered.

  1. Probabilities of filaments in a Poissonian distribution of points -I

    International Nuclear Information System (INIS)

    Betancort-Rijo, J.

    1989-01-01

    Statistical techniques are devised to assess the likelihood of a Poisson sample of points in two and three dimensions containing specific filamentary structures. For that purpose, the expression of Otto et al. (1986, Astrophys. J., 304) for the probability density of clumps in a Poissonian distribution of points is generalized to any value of the density contrast. A way of counting filaments differing from that of Otto et al. is proposed, because at low density contrast the filaments counted by Otto et al. are distributed in a clumpy fashion, each clump of filaments corresponding to a distinct observed filament. (author)

  2. Basic problems solving for two-dimensional discrete 3 × 4 order hidden markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model is proposed to overcome the shortcomings of the classical hypothesis of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and the observation symbol probability depends not only on the current state but also on the immediate horizontal, vertical and diagonal states. This paper defines the structure of the model and studies its three basic problems: probability calculation, path backtracking and parameter estimation. By exploiting the idea that the sequences of states on rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, several algorithms solving the three problems are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Compared with the two-dimensional discrete hidden Markov model, the structure of the proposed model carries more statistical characteristics, and therefore the proposed model can, in theory, describe some practical problems more accurately.

  3. Spectral line shapes in linear absorption and two-dimensional spectroscopy with skewed frequency distributions

    NARCIS (Netherlands)

    Farag, Marwa H.; Hoenders, Bernhard J.; Knoester, Jasper; Jansen, Thomas L. C.

    2017-01-01

    The effect of Gaussian dynamics on the line shapes in linear absorption and two-dimensional correlation spectroscopy is well understood as the second-order cumulant expansion provides exact spectra. Gaussian solvent dynamics can be well analyzed using slope line analysis of two-dimensional

  4. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
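
    The single-integral formulation and the trapezoidal rule described above can be sketched numerically. Everything below is illustrative: the lognormal/normal friction distributions and their parameter values are assumptions for the sketch, not data from the study.

```python
import numpy as np
from scipy import stats

def probability_of_slip(f_req_pdf, F_avail_cdf, grid):
    """Single-integral form: P(slip) = integral of f_req(u) * P(available < u) du,
    evaluated with the trapezoidal rule on a fixed grid."""
    integrand = f_req_pdf(grid) * F_avail_cdf(grid)
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(grid)) / 2.0)

# Illustrative (not measured) friction-coefficient distributions; note the
# paper's point that neither is required to be normal.
required = stats.lognorm(s=0.3, scale=0.25)
available = stats.norm(loc=0.5, scale=0.08)

grid = np.linspace(0.0, 1.5, 20001)
p_slip = probability_of_slip(required.pdf, available.cdf, grid)
print(f"P(slip) = {p_slip:.4f}")
```

    Any other scipy frozen distributions can be substituted for `required` and `available` without changing the integration code, which is the point of the distribution-free formulation.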

  5. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  6. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
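
    Two of the fitting techniques surveyed above, method of moments and maximum likelihood, can be sketched in a few lines. The gamma model and the synthetic data below are assumptions for illustration; the report's own examples use Yucca Mountain performance assessment data, which are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for project data; a gamma model is assumed throughout.
data = rng.gamma(shape=2.0, scale=1.5, size=500)

# Method of moments: for a gamma distribution, shape = mean^2/var, scale = var/mean.
m, v = data.mean(), data.var()
k_mom, theta_mom = m * m / v, v / m

# Maximum likelihood via scipy, with the location fixed at zero.
k_mle, loc, theta_mle = stats.gamma.fit(data, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted model.
ks_stat, p_value = stats.kstest(data, "gamma", args=(k_mle, 0, theta_mle))
print(k_mom, theta_mom, k_mle, theta_mle, ks_stat, p_value)
```

    The same pattern (fit, then test) carries over to the other parametric families the report discusses; only the distribution name and parameter bookkeeping change.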

  7. Diffusive transport in a one dimensional disordered potential involving correlations

    International Nuclear Information System (INIS)

    Monthus, C.; Paris-6 Univ., 75

    1995-03-01

    The transport properties of one-dimensional Brownian diffusion under the influence of a quenched random force, distributed as a two-level Poisson process, are discussed. Large-time scaling laws of the position of the Brownian particle and the probability distribution of the stationary flux going through a sample between two prescribed concentrations are studied. (author) 14 refs.; 3 figs

  8. Dressed-state analysis of efficient two-dimensional atom localization in a four-level atomic system

    International Nuclear Information System (INIS)

    Wang, Zhiping; Yu, Benli

    2014-01-01

    We investigate two-dimensional atom localization via spontaneous emission in a four-level atomic system. It is found that the detection probability and precision of two-dimensional atom localization can be significantly improved due to the interference effect between the spontaneous decay channels and the dynamically induced quantum interference generated by the probe and composite fields. More importantly, a 100% probability of finding an atom within the sub-half-wavelength domain of the standing waves can be reached when the corresponding conditions are satisfied. As a result, our scheme may be helpful in laser cooling or atom nano-lithography via atom localization. (paper)

  9. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model, with an analytic aggregation methodology based on probability theory rather than Monte Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 Kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional. © 1991.

  10. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be

  11. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions, and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slip", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters the soil, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. 
The Philip infiltration model

  12. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  13. A Probability Distribution over Latent Causes, in the Orbitofrontal Cortex.

    Science.gov (United States)

    Chan, Stephanie C Y; Niv, Yael; Norman, Kenneth A

    2016-07-27

    The orbitofrontal cortex (OFC) has been implicated in both the representation of "state," in studies of reinforcement learning and decision making, and also in the representation of "schemas," in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or "latent cause" that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes' rule to compute a posterior probability distribution over latent causes. To test whether such a posterior probability distribution is represented in the OFC, we tasked human participants with inferring a probability distribution over four possible latent causes, based on their observations. Using fMRI pattern similarity analyses, we found that BOLD activity in the OFC is best explained as representing the (log-transformed) posterior distribution over latent causes. Furthermore, this pattern explained OFC activity better than other task-relevant alternatives, such as the most probable latent cause, the most recent observation, or the uncertainty over latent causes. Our world is governed by hidden (latent) causes that we cannot observe, but which generate the observations we see. A range of high-level cognitive processes require inference of a probability distribution (or "belief distribution") over the possible latent causes that might be generating our current observations. This is true for reinforcement learning and decision making (where the latent cause comprises the true "state" of the task), and for episodic memory (where memories are believed to be organized by the inferred situation or "schema"). Using fMRI, we show that this belief distribution over latent causes is encoded in patterns of brain activity in the orbitofrontal cortex, an area that has been separately implicated in the representations of both states and schemas. Copyright © 2016 the authors 0270-6474/16/367817-12$15.00/0.
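
    The inference at the heart of the task described above is Bayes' rule applied sequentially. A toy sketch with four hypothetical latent causes and invented observation likelihoods (the actual task stimuli and probabilities are not reproduced here):

```python
import numpy as np

# Hypothetical setup: 4 latent causes, each generating one of 3 observations,
# with rows giving per-cause observation likelihoods (each row sums to 1).
likelihood = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.80, 0.10],
    [0.30, 0.30, 0.40],
    [0.25, 0.25, 0.50],
])

posterior = np.full(4, 0.25)      # uniform prior over the latent causes

for obs in [0, 0, 2]:             # a short, made-up observation sequence
    posterior = posterior * likelihood[:, obs]   # Bayes' rule: prior x likelihood
    posterior /= posterior.sum()                 # normalize to a distribution

print(np.log(posterior))          # log-posterior, the form compared to BOLD patterns
```

    The full belief distribution, not just its argmax, is what the fMRI pattern analyses are reported to track; the most probable cause is one of the alternatives the study rules out.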

  14. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  15. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ -stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α . We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ -stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
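
    Diffusion entropy analysis, as described in the two records above, can be sketched as follows: aggregate the fluctuations over windows of length t, estimate the PDF with a histogram, and read the growth exponent off the slope of the Shannon entropy against ln t. Plain Gaussian noise is used below as a stand-in for the Lévy fluctuations, so the recovered exponent should sit near the diffusive value 1/2; the window sizes and bin count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_entropy(steps, window_sizes, bins=60):
    """Shannon entropy S(t) of the displacement PDF at each window size t.
    For self-similar fluctuations, S(t) ~ const + delta * ln(t)."""
    entropies = []
    for t in window_sizes:
        n = len(steps) // t
        # displacements over non-overlapping windows of length t
        disp = steps[: n * t].reshape(n, t).sum(axis=1)
        p, edges = np.histogram(disp, bins=bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p)) * dx)
    return np.array(entropies)

steps = rng.normal(size=2_000_000)       # ordinary Gaussian noise: delta = 1/2
ts = np.array([10, 20, 40, 80, 160])
S = diffusion_entropy(steps, ts)
delta = np.polyfit(np.log(ts), S, 1)[0]  # scaling (growth) exponent
print(delta)
```

    Replacing the Gaussian steps with heavy-tailed (Lévy-stable) increments would shift the fitted slope toward the superdiffusive values the paper studies.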

  16. Transient two-dimensional flow in porous media

    International Nuclear Information System (INIS)

    Sharpe, L. Jr.

    1979-01-01

    The transient flow of an isothermal ideal gas from the cavity formed by an underground nuclear explosion is investigated. A two-dimensional finite element method is used in analyzing the gas flow. Numerical results of the pressure distribution are obtained for both the stemming column and the surrounding porous media

  17. Two-dimensional NMR spectrometry

    International Nuclear Information System (INIS)

    Farrar, T.C.

    1987-01-01

    This article is the second in a two-part series. In part one (ANALYTICAL CHEMISTRY, May 15) the authors discussed one-dimensional nuclear magnetic resonance (NMR) spectra and some relatively advanced nuclear spin gymnastics experiments that provide a capability for selective sensitivity enhancements. In this article an overview and some applications of two-dimensional NMR experiments are presented. These powerful experiments are important complements to the one-dimensional experiments. As in the more sophisticated one-dimensional experiments, the two-dimensional experiments involve three distinct time periods: a preparation period, t0; an evolution period, t1; and a detection period, t2

  18. Laser sheet dropsizing based on two-dimensional Raman and Mie scattering.

    Science.gov (United States)

    Malarski, Anna; Schürer, Benedikt; Schmitz, Ingo; Zigan, Lars; Flügel, Alexandre; Leipertz, Alfred

    2009-04-01

    The imaging and quantification of droplet sizes in sprays is a challenging task for optical scientists and engineers. Laser sheet dropsizing (LSDS) combines the two-dimensional information of two different optical processes, one that is proportional to the droplet volume and one that depends on the droplet surface, e.g., Mie scattering. Besides Mie scattering, here we use two-dimensional Raman scattering as the volume-dependent measurement technique. Two different calibration strategies are presented and discussed. Two-dimensional droplet size distributions in a spray have been validated in comparison with the results of point-resolved phase Doppler anemometry (PDA) measurements.

  19. Laser sheet dropsizing based on two-dimensional Raman and Mie scattering

    International Nuclear Information System (INIS)

    Malarski, Anna; Schuerer, Benedikt; Schmitz, Ingo; Zigan, Lars; Fluegel, Alexandre; Leipertz, Alfred

    2009-01-01

    The imaging and quantification of droplet sizes in sprays is a challenging task for optical scientists and engineers. Laser sheet dropsizing (LSDS) combines the two-dimensional information of two different optical processes, one that is proportional to the droplet volume and one that depends on the droplet surface, e.g., Mie scattering. Besides Mie scattering, here we use two-dimensional Raman scattering as the volume-dependent measurement technique. Two different calibration strategies are presented and discussed. Two-dimensional droplet size distributions in a spray have been validated in comparison with the results of point-resolved phase Doppler anemometry (PDA) measurements
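
    The ratio principle behind LSDS, as described in the two records above, can be illustrated with a synthetic droplet ensemble: a volume-proportional signal (here, Raman) scales as the sum of d³ and a surface-proportional (Mie) signal as the sum of d², so their ratio yields the Sauter mean diameter d32 up to a calibration constant. The lognormal size distribution below is an assumption for illustration, not spray data from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical droplet diameters [m], lognormal around a 20 um median.
d = rng.lognormal(mean=np.log(20e-6), sigma=0.3, size=10000)

signal_volume = np.sum(d**3)    # volume-proportional signal (e.g. Raman)
signal_surface = np.sum(d**2)   # surface-proportional signal (Mie)

d32 = signal_volume / signal_surface   # Sauter mean diameter, sum(d^3)/sum(d^2)
print(d32)
```

    In an actual measurement the two signals carry unknown optical constants, which is why the paper's calibration strategies (or a PDA reference point) are needed to turn the ratio image into absolute droplet sizes.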

  20. Two-dimensional goodness-of-fit testing in astronomy

    International Nuclear Information System (INIS)

    Peacock, J.A.

    1983-01-01

    This paper deals with the techniques available to test for consistency between the empirical distribution of data points on a plane and a hypothetical density law. Two new statistical tests are developed. The first is a two-dimensional version of the Kolmogorov-Smirnov test, for which the distribution of the test statistic is investigated using a Monte Carlo method. This test is found in practice to be very nearly distribution-free, and empirical formulae for the confidence levels are given. Secondly, the method of power-spectrum analysis is extended to deal with cases in which the null hypothesis is not a uniform distribution. These methods are illustrated by application to the distribution of quasar candidates found on an objective-prism plate of the Virgo Cluster. (author)
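
    A simplified variant of the two-dimensional KS idea above can be sketched as follows: at every data point, compare the empirical and model probabilities of the four quadrants that point defines, and take the maximum discrepancy. The uniform-density null hypothesis and sample sizes below are illustrative, and this quadrant-counting shortcut is not Peacock's full statistic (which maximizes over all point pairs).

```python
import numpy as np

rng = np.random.default_rng(1)

def ks2d_uniform(x, y):
    """Simplified 2-D KS statistic against independent uniforms on the unit
    square: max |empirical - model| quadrant probability over data points."""
    d = 0.0
    for xi, yi in zip(x, y):
        for qx, px in ((x < xi, xi), (x >= xi, 1.0 - xi)):
            for qy, py in ((y < yi, yi), (y >= yi, 1.0 - yi)):
                emp = np.mean(qx & qy)           # empirical quadrant fraction
                d = max(d, abs(emp - px * py))   # model quadrant probability
    return d

uniform = rng.random((2, 200))                                   # matches the null
clustered = np.clip(rng.normal(0.5, 0.05, size=(2, 200)), 0, 1)  # strongly clumped

d_uni = ks2d_uniform(*uniform)
d_clu = ks2d_uniform(*clustered)
print(d_uni, d_clu)
```

    As the paper notes, the null distribution of such a statistic is best obtained by Monte Carlo: repeat the computation on many synthetic samples drawn from the hypothetical density and read off confidence levels.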

  1. Warranty menu design for a two-dimensional warranty

    International Nuclear Information System (INIS)

    Ye, Zhi-Sheng; Murthy, D.N. Pra

    2016-01-01

    Fierce competition in the commercial product market has forced manufacturers to provide customer-friendly warranties with a view to achieving higher customer satisfaction and increasing market share. This study proposes a strategy that offers customers a two-dimensional warranty menu with a number of warranty choices, called a flexible warranty policy. We investigate the design of a flexible two-dimensional warranty policy that contains a number of rectangular regions. This warranty policy is obtained by dividing customers into several groups according to their use rates and providing each group a germane warranty region. Consumers choose a favorable one from the menu according to their usage behaviors. Evidently, this flexible warranty policy is attractive to users of different usage behaviors, and thus it gives the manufacturer a good position in advertising the product. When consumers are unaware of their use rates upon purchase, we consider a fixed two-dimensional warranty policy with a stair-case warranty region and show that it is equivalent to the flexible policy. Such an equivalence reveals the inherent relationship between the rectangular warranty policy, the L-shape warranty policy, the step-stair warranty policy and the iso-probability of failure warranty policy that were extensively discussed in the literature. - Highlights: • We design a two-dimensional warranty menu with a number of warranty choices. • Consumers can choose a favorable one from the menu as per their usage behavior. • We further consider a fixed 2D warranty policy with a stair-case warranty region. • We show the equivalence of the two warranty policies.

  2. Probability, statistics, and associated computing techniques

    International Nuclear Information System (INIS)

    James, F.

    1983-01-01

    This chapter attempts to explore the extent to which it is possible for the experimental physicist to find optimal statistical techniques to provide a unique and unambiguous quantitative measure of the significance of raw data. Discusses statistics as the inverse of probability; normal theory of parameter estimation; normal theory (Gaussian measurements); the universality of the Gaussian distribution; real-life resolution functions; combination and propagation of uncertainties; the sum or difference of 2 variables; local theory, or the propagation of small errors; error on the ratio of 2 discrete variables; the propagation of large errors; confidence intervals; classical theory; Bayesian theory; use of the likelihood function; the second derivative of the log-likelihood function; multiparameter confidence intervals; the method of MINOS; least squares; the Gauss-Markov theorem; maximum likelihood for uniform error distribution; the Chebyshev fit; the parameter uncertainties; the efficiency of the Chebyshev estimator; error symmetrization; robustness vs. efficiency; testing of hypotheses (e.g., the Neyman-Pearson test); goodness-of-fit; distribution-free tests; comparing two one-dimensional distributions; comparing multidimensional distributions; and permutation tests for comparing two point sets

  3. Probability distributions for Markov chain based quantum walks

    Science.gov (United States)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical to the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin, discussed in the literature. The paper closes with an outlook on possible future directions.
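
    The classical fact the result parallels, that a symmetric (hence doubly stochastic) transition matrix has a uniform stationary distribution, is easy to check numerically. The sketch below computes the Cesaro (time-averaged) distribution of a small classical chain; it does not construct the Szegedy quantum walk itself, and the matrix is invented.

```python
import numpy as np

# A symmetric, doubly stochastic transition matrix on 4 states.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.2, 0.5, 0.1, 0.2],
    [0.2, 0.1, 0.5, 0.2],
    [0.1, 0.2, 0.2, 0.5],
])

pi = np.array([1.0, 0.0, 0.0, 0.0])   # start concentrated on state 0
avg = np.zeros(4)
T = 5000
for t in range(T):
    avg += pi
    pi = pi @ P                        # one step of the chain
avg /= T                               # Cesaro (time-averaged) distribution

print(avg)                             # approaches uniform, [0.25] * 4
```

    For the quantum walk, the analogous statement holds for the Cesaro average of the measurement distribution, which is the sense of convergence the paper establishes.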

  4. Design of a rotational three-dimensional nonimaging device by a compensated two-dimensional design process.

    Science.gov (United States)

    Yang, Yi; Qian, Ke-Yuan; Luo, Yi

    2006-07-20

    A compensation process has been developed to design rotational three-dimensional (3D) nonimaging devices. By compensating the desired light distribution during a two-dimensional (2D) design process for an extended Lambertian source using a compensation coefficient, the meridian plane of a 3D device with good performance can be obtained. The method is fast and applicable in many cases. Solutions to two kinds of optical design problems have been proposed, and the limitation of this compensated 2D design method is discussed.

  5. Fermion emission in a two-dimensional black hole space-time

    International Nuclear Information System (INIS)

    Wanders, G.

    1994-01-01

    We investigate massless fermion production by a two-dimensional dilatonic black hole. Our analysis is based on the Bogoliubov transformation relating the outgoing fermion field observed outside the black hole horizon to the incoming field present before the black hole creation. It takes full account of the fact that the transformation is neither invertible nor unitarily implementable. The particle content of the outgoing radiation is specified by means of inclusive probabilities for the detection of sets of outgoing fermions and antifermions in given states. For states localized near the horizon these probabilities characterize a thermal equilibrium state. The way the probabilities become thermal as one approaches the horizon is discussed in detail

  6. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    Science.gov (United States)

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
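
    The baseline-correct-then-pick structure can be sketched in one dimension. This is a toy stand-in, not the paper's NEB/mixture algorithm: a robust baseline/noise estimate (median and MAD) followed by thresholded local-maximum picking; all parameter values are invented.

```python
import numpy as np

def detect_peaks(signal, window=5, snr=5.0):
    """Toy 1-D peak picker: estimate baseline and noise robustly
    (median / MAD), then keep local maxima that exceed
    baseline + snr * noise."""
    baseline = np.median(signal)
    noise = 1.4826 * np.median(np.abs(signal - baseline))  # MAD -> sigma
    peaks = []
    for i in range(window, len(signal) - window):
        seg = signal[i - window:i + window + 1]
        if signal[i] == seg.max() and signal[i] > baseline + snr * noise:
            peaks.append(i)
    return peaks

x = np.linspace(0, 10, 500)
y = np.exp(-(x - 3)**2 / 0.02) + 0.5 * np.exp(-(x - 7)**2 / 0.02)
y += 0.01 * np.random.default_rng(0).standard_normal(x.size)
print(detect_peaks(y))  # indices of the two injected peaks (near 150 and 349)
```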

  8. Statistical thermodynamics of a two-dimensional relativistic gas.

    Science.gov (United States)

    Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood

    2009-03-01

    In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system for studying relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as in the moving frame (Γ′). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter β for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, statistical methods such as distribution functions or the equipartition theorem are ultimately inconclusive in deciding on a correct temperature transformation law (if any).

  9. A new method for the determination of peak distribution across a two-dimensional separation space for the identification of optimal column combinations.

    Science.gov (United States)

    Leonhardt, Juri; Teutenberg, Thorsten; Buschmann, Greta; Gassner, Oliver; Schmidt, Torsten C

    2016-11-01

    For the identification of the optimal column combinations, a comparative orthogonality study of single columns and columns coupled in series for the first dimension of a microscale two-dimensional liquid chromatographic approach was performed. In total, eight columns or column combinations were chosen. For the assessment of the optimal column combination, the orthogonality value as well as the peak distributions across the first and second dimensions were used. Three different methods of orthogonality calculation, namely the Convex Hull, Bin Counting, and Asterisk methods, were compared. The first two methods do not provide any information about peak distribution; the third provides this important information, but is not optimal when only a limited number of components are used for method development. Therefore, a new concept for assessing peak distribution across the separation space of two-dimensional chromatographic systems and detecting clustering was developed. It could be shown that the Bin Counting method, in combination with additionally calculated histograms for the respective dimensions, is well suited for the evaluation of orthogonality and peak clustering. The newly developed method could be used generally in the assessment of 2D separations.

  10. Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.

    Science.gov (United States)

    Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel

    2014-01-01

    Novelty detection involves the construction of a "model of normality", and then classifies test data as being either "normal" or "abnormal" with respect to that model. For this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available, but in which cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" areas of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be demonstrated to be the limiting distribution for data occurring within the tails of most practically-encountered probability distributions. This paper provides an extension of the GPD, allowing the modelling of probability distributions of arbitrarily high dimension, such as occurs when using complex, multimodal, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to determine "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
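
    As background for the univariate case, a minimal sketch of fitting a GPD to threshold excesses by the standard method of moments (the paper's high-dimensional extension is far beyond this). The estimator formulas are the textbook ones; the exponential test data and tolerances are ours.

```python
import numpy as np

def gpd_fit_moments(excesses):
    """Method-of-moments fit of the generalised Pareto distribution to
    threshold excesses.  For GPD(xi, sigma), mean m = sigma/(1-xi) and
    variance v = sigma^2/((1-xi)^2 (1-2 xi)), which inverts to
        xi    = 0.5 * (1 - m*m/v)
        sigma = 0.5 * m * (m*m/v + 1)
    (valid for xi < 1/2, i.e. finite variance)."""
    m = excesses.mean()
    v = excesses.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

rng = np.random.default_rng(1)
tail = rng.exponential(scale=2.0, size=100_000)  # exponential = GPD with xi = 0
xi, sigma = gpd_fit_moments(tail)
print(xi, sigma)  # xi near 0, sigma near 2
```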

  11. Calculation of probabilities of rotational transitions of two-atom molecules in the collision with heavy particles

    International Nuclear Information System (INIS)

    Vargin, A.N.; Ganina, N.A.; Konyukhov, V.K.; Selyakov, V.I.

    1975-01-01

    This paper solves the problem of calculating collisional probabilities of rotational transitions (CPRT) in molecule-molecule and molecule-atom interactions in three-dimensional space, using a quasiclassical approach. The calculation proceeds as follows: the particle trajectory is computed classically, yielding the time dependence of the perturbation operator; averaging this operator over the wave functions of the initial and final states produces the CPRT. The classical treatment of the molecular trajectory is justified by the smallness of the de Broglie wavelength compared with characteristic atomic distances, and by the smallness of the transferred rotational quantum compared with the translational energy of the particles. The results depend on the chosen interaction potential of the colliding particles. It follows from the Massey criterion that the region of nonadiabaticity of the interaction is comparable to the internuclear distances of a molecule; a short-range potential is therefore required to describe the interaction. Analytical expressions suitable for practical calculations are obtained for one- and two-quantum rotational transitions of diatomic molecules. The CPRT are averaged over the Maxwell velocity distribution, yielding analytical dependences on gas temperature. Numerical calculations of probabilities for the HCl-HCl, HCl-He, and CO-CO interactions are presented to illustrate the method.

  12. Evaluation of burst probability for tubes by Weibull distributions

    International Nuclear Information System (INIS)

    Kao, S.

    1975-10-01

    The investigation of candidate distributions that best describe the burst pressure failure probability characteristics of nuclear power steam generator tubes has been continued. To date it has been found that the Weibull distribution provides an acceptable fit to the available data from both the statistical and physical viewpoints. The reasons for the acceptability of the Weibull distribution are stated, together with the results of tests for the suitability of fit. In exploring the acceptability of the Weibull distribution for the fitting, a graphical method called the "density-gram" is employed instead of the usual histogram. With this method a more sensible graphical observation of the empirical density may be made in cases where the available data are very limited. Based on these methods, estimates of failure pressure are made for the left-tail probabilities.
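
    A minimal sketch of the kind of Weibull fit and left-tail evaluation described, using median-rank regression on simulated "burst pressures"; the rank-regression method, parameter values, and data are our illustrative assumptions, not the report's procedure.

```python
import numpy as np

def weibull_fit(data):
    """Estimate Weibull shape k and scale lam by median-rank
    regression: linearise F(x) = 1 - exp(-(x/lam)^k) as
    ln(-ln(1 - F)) = k*ln(x) - k*ln(lam) and fit a straight line."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Benard median ranks
    k, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    lam = np.exp(-c / k)
    return k, lam

def weibull_cdf(x, k, lam):
    """Left-tail failure probability P(burst pressure <= x)."""
    return 1.0 - np.exp(-(x / lam) ** k)

rng = np.random.default_rng(2)
bursts = 3.0 * rng.weibull(5.0, size=2000)        # scale 3, shape 5
k, lam = weibull_fit(bursts)
print(k, lam, weibull_cdf(1.5, k, lam))  # shape ~5, scale ~3, small tail probability
```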

  13. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Error detection codes are mechanisms that enable robust delivery of data over unreliable, error-prone communication channels and devices. There are two classes of error detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in contrast, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes; a detailed study of this parameter allows the behavior of the code to be analyzed when errors are injected into the encoding device. The complexity of the encoding function also plays an important role: encoding functions with low computational complexity and a low masking probability give the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It is shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, it decreases the difference between the maximum and the average value of the error masking probability. Our results show that functions with greater complexity have smoother maxima of the error masking probability, which significantly complicates the analysis of the code by an attacker. As a result, with a complex encoding function the probability of successful algebraic manipulation is reduced. The paper also discusses an approach to measuring the error masking probability.
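
    For linear codes the masking picture is simple: an additive error is masked exactly when it is itself a nonzero codeword. A brute-force sketch for a tiny binary code (the [4,2] generator matrix is an invented example, and this enumeration is not the paper's method):

```python
import itertools
import numpy as np

def masking_probability(G):
    """For a binary linear code with k x n generator matrix G, an
    additive error e is masked exactly when e is a nonzero codeword,
    so under uniformly random nonzero errors the masking probability
    is (2**k - 1) / (2**n - 1).  Verified here by enumeration."""
    k, n = G.shape
    codewords = {tuple(np.dot(m, G) % 2)
                 for m in itertools.product([0, 1], repeat=k)}
    masked = sum(1 for e in itertools.product([0, 1], repeat=n)
                 if any(e) and tuple(e) in codewords)
    return masked / (2 ** n - 1)

# toy [4,2] code
G = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1]])
print(masking_probability(G))  # (2**2 - 1) / (2**4 - 1) = 0.2
```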

  14. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
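
    The anomaly the communication warns about is easy to see numerically: for a lognormal with large log-space sigma, the mode, median, and mean diverge wildly, so "mean ± standard deviation" misrepresents the variable. A small sketch (the sigma = 2 example and the symmetric-in-log-space interval construction are ours):

```python
import math

def lognormal_summary(mu, sigma, z=1.96):
    """Mode, median, mean and an ~95% confidence interval (symmetric
    in log space) for a lognormal variable X = exp(N(mu, sigma^2)):
    mode = exp(mu - sigma^2), median = exp(mu),
    mean = exp(mu + sigma^2 / 2)."""
    return {
        "mode":   math.exp(mu - sigma**2),
        "median": math.exp(mu),
        "mean":   math.exp(mu + 0.5 * sigma**2),
        "ci":     (math.exp(mu - z * sigma), math.exp(mu + z * sigma)),
    }

# Large uncertainty: the three candidate "best values" disagree badly.
s = lognormal_summary(mu=0.0, sigma=2.0)
print(s)  # mode ~0.018, median 1.0, mean ~7.39
```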

  15. Simulation of Daily Weather Data Using Theoretical Probability Distributions.

    Science.gov (United States)

    Bruhn, J. A.; Fry, W. E.; Fick, G. W.

    1980-09-01

    A computer simulation model was constructed to supply daily weather data to a plant disease management model for potato late blight. In the weather model Monte Carlo techniques were employed to generate daily values of precipitation, maximum temperature, minimum temperature, minimum relative humidity and total solar radiation. Each weather variable is described by a known theoretical probability distribution but the values of the parameters describing each distribution are dependent on the occurrence of rainfall. Precipitation occurrence is described by a first-order Markov chain. The amount of rain, given that rain has occurred, is described by a gamma probability distribution. Maximum and minimum temperature are simulated with a trivariate normal probability distribution involving maximum temperature on the previous day, maximum temperature on the current day and minimum temperature on the current day. Parameter values for this distribution are dependent on the occurrence of rain on the previous day. Both minimum relative humidity and total solar radiation are assumed to be normally distributed. The values of the parameters describing the distribution of minimum relative humidity are dependent on rainfall occurrence on the previous day and current day. Parameter values for total solar radiation are dependent on the occurrence of rain on the current day. The assumptions made during model construction were found to be appropriate for actual weather data from Geneva, New York. The performance of the weather model was evaluated by comparing the cumulative frequency distributions of simulated weather data with the distributions of actual weather data from Geneva, New York and Fort Collins, Colorado. For each location, simulated weather data were similar to actual weather data in terms of mean response, variability and autocorrelation. The possible applications of this model when used with models of other components of the agro-ecosystem are discussed.
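
    The precipitation component (first-order Markov chain for occurrence, gamma amounts on wet days) can be sketched in a few lines. All parameter values below are made up, not those fitted to the Geneva or Fort Collins data.

```python
import numpy as np

def simulate_rain(n_days, p_wd=0.3, p_ww=0.6, shape=0.8, scale=6.0, seed=0):
    """Minimal daily-precipitation generator: a two-state first-order
    Markov chain for rain occurrence (p_wd = P(wet | dry),
    p_ww = P(wet | wet)) and gamma-distributed amounts on wet days."""
    rng = np.random.default_rng(seed)
    wet = False
    rain = np.zeros(n_days)
    for d in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        if wet:
            rain[d] = rng.gamma(shape, scale)
    return rain

r = simulate_rain(100_000)
# long-run wet-day frequency -> p_wd / (p_wd + 1 - p_ww) = 0.3/0.7
print((r > 0).mean())
```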

  16. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    Science.gov (United States)

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a Gaussian copula with Pareto marginal probability density functions.
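
    For reference, a sketch of sampling a two-dimensional Gabor function (Gaussian envelope times sinusoidal carrier) on a grid; the parameterisation and default values are a common textbook form, not the paper's exact one.

```python
import numpy as np

def gabor_2d(size=65, sigma=8.0, wavelength=12.0, theta=0.0,
             aspect=1.0, phase=0.0):
    """Two-dimensional Gabor function on a square grid: sigma sets the
    envelope size, wavelength the carrier spatial frequency, aspect
    the envelope ellipticity, theta the orientation -- the parameter
    groups the paper finds to be shaped by natural image statistics."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (aspect * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

g = gabor_2d()
print(g.shape, g[32, 32])  # 65x65 patch, value 1.0 at the centre
```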

  17. Random distribution of nucleoli in metabolic cells

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, R.J.; Waterman, M.S.

    1977-01-01

    Hasofer (1974) has studied a probabilistic model for the fusion of nucleoli in metabolic cells. The nucleoli are uniformly distributed at points in the nucleus, assumed to be a sphere. The nucleoli grow from a point to a maximum size during interphase, and fusion is said to occur if the nucleoli touch. For this model, Hasofer calculated the probability of fusion and found it much smaller than experimental data would indicate. Experimental data of this type is taken by use of a microscope where a two-dimensional view or projection of the three-dimensional cell is obtained. Hasofer implicitly assumes that actual fusion can be distinguished from the case where the two nucleoli do not touch but their two-dimensional projections overlap. It is assumed, in this letter, that these two cases cannot be distinguished. The probability obtained by Beckman and Waterman is larger than Hasofer's and a much better fit to the experimental data is obtained. Even if true fusion can be unfailingly distinguished from overlap of the two-dimensional projections, it is hoped that these calculations will allow someone to propose the correct (non-uniform) model. It is concluded, for the assumptions used, that there is not sufficient evidence to reject the hypothesis of uniform distribution of the nucleoli.
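
    The letter's key observation, that two-dimensional projections overlap more often than the nucleoli actually touch, is easy to check by Monte Carlo. A sketch under invented parameters (nucleolus radius, trial count); this is our illustration, not the authors' calculation.

```python
import numpy as np

def fusion_vs_projection(r=0.2, n_trials=200_000, seed=3):
    """Two nucleoli of radius r with centres uniform in the unit
    sphere 'touch' when their 3-D centre distance is below 2r, but
    their microscope image (projection onto the xy-plane) already
    overlaps when the projected distance is below 2r."""
    rng = np.random.default_rng(seed)
    # uniform points in the unit ball via rejection sampling
    pts = rng.uniform(-1, 1, size=(6 * n_trials, 3))
    pts = pts[(pts**2).sum(axis=1) <= 1.0][:2 * n_trials]
    a, b = pts[:n_trials], pts[n_trials:2 * n_trials]
    d3 = np.linalg.norm(a - b, axis=1)
    d2 = np.linalg.norm((a - b)[:, :2], axis=1)
    return (d3 < 2 * r).mean(), (d2 < 2 * r).mean()

touch, overlap = fusion_vs_projection()
print(touch, overlap)  # projected overlap is the more probable event
```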

  18. Wave packet fractional revivals in a one-dimensional Rydberg atom

    International Nuclear Information System (INIS)

    Veilande, Rita; Bersons, Imants

    2007-01-01

    We investigate many characteristic features of revival and fractional revival phenomena via derived analytic expressions for an autocorrelation function of a one-dimensional Rydberg atom with weighting probabilities modelled by a Gaussian or a Lorentzian distribution. The fractional revival phenomenon in the ionization probabilities of a one-dimensional Rydberg atom irradiated by two short half-cycle pulses is also studied. When many states are involved in the formation of the wave packet, the revival is lower and broader than the initial wave packet and the fractional revivals overlap and disappear with time
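
    The autocorrelation function underlying revival analysis is a weighted sum of phase factors over eigenstates. A numerical sketch with Gaussian weights and hydrogenic energies; the centre quantum number, width, and the use of exact (rather than expanded) energies are our choices.

```python
import numpy as np

def autocorrelation(t, n0=60, width=2.5):
    """|A(t)| for a Rydberg wave packet with Gaussian weights w_n
    centred on principal quantum number n0, using hydrogenic energies
    E_n = -1/(2 n^2) in atomic units:  A(t) = sum_n w_n exp(-i E_n t)."""
    n = np.arange(max(2, n0 - 25), n0 + 26)
    w = np.exp(-((n - n0) / width) ** 2 / 2)
    w /= w.sum()
    E = -0.5 / n.astype(float) ** 2
    t = np.atleast_1d(np.asarray(t, dtype=float))
    phases = np.exp(-1j * np.outer(t, E))
    return np.abs(phases @ w)

T_cl = 2 * np.pi * 60.0**3   # classical Kepler period for n0 = 60
print(autocorrelation([0.0, T_cl]))  # full amplitude at t = 0, partial recurrence at T_cl
```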

  19. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  20. Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.

    Science.gov (United States)

    Latha, Indu; Reichenbach, Stephen E; Tao, Qingping

    2011-09-23

    Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Mode selection in two-dimensional Bragg resonators based on planar dielectric waveguides

    International Nuclear Information System (INIS)

    Baryshev, V R; Ginzburg, N S; Zaslavskii, V Yu; Malkin, A M; Sergeev, A S; Thumm, M

    2009-01-01

    Two-dimensional Bragg resonators based on planar dielectric waveguides are analysed. It is shown that the doubly periodic corrugation deposited on the dielectric surface in the form of two gratings with translational vectors directed perpendicular to each other ensures effective selection of modes along two coordinates at large Fresnel parameters. This result is obtained both by the method of coupled waves (geometrical optics approximation) and by the direct numerical simulations. Two-dimensional Bragg resonators make it possible to fabricate two-dimensional distributed feedback lasers and to provide generation of spatially coherent radiation in large-volume active media.

  2. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

    In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space-independent, one-group neutron point reactor model without delayed neutrons. We recall the generating function methodology and analytical results obtained by G.I. Bell when the c2 approximation is used, and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution, which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology, where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time dependent distributions and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method.
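
    The discrete-Fourier-transform inversion step can be sketched on a case with a known answer. This recovers a Poisson distribution from its characteristic function; it only illustrates the DFT inversion idea, not the paper's reactor model.

```python
import numpy as np

def pmf_from_characteristic(phi, n_points=64):
    """Recover a probability mass function on {0, ..., n_points-1}
    from its characteristic function via a discrete Fourier transform:
        p_n = (1/N) * sum_k phi(2*pi*k/N) * exp(-2i*pi*k*n/N).
    Exact up to aliasing of probability mass beyond n_points."""
    k = np.arange(n_points)
    omega = 2.0 * np.pi * k / n_points
    values = phi(omega)
    p = np.fft.fft(values) / n_points   # fft applies exp(-2i*pi*k*n/N)
    return p.real

lam = 4.0
poisson_cf = lambda w: np.exp(lam * (np.exp(1j * w) - 1.0))
p = pmf_from_characteristic(poisson_cf)
print(p[0], np.exp(-lam))  # both ~0.0183
```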

  3. Basic problems and solution methods for two-dimensional continuous 3 × 3 order hidden Markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Tang, Gui-jin; Gan, Zong-liang; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model, referred to as the two-dimensional continuous 3 × 3 order hidden Markov model, is put forward to avoid the disadvantages of the classical hypothesis of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability relies not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and in which the probability density of the observation relies not only on the current state but also on the immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation, and path backtracking. Algorithms solving these problems are derived theoretically, by exploiting the idea that the sequences of states on the rows or columns of the model can be viewed as states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because there are more statistical characteristics in the structure of the proposed model, it can describe some practical problems more accurately than the two-dimensional continuous hidden Markov model.
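
    Since the paper reduces rows and columns to one-dimensional chains, the standard scaled forward algorithm is the relevant building block. A sketch for an ordinary discrete-emission HMM (the toy matrices are invented; the paper's model is continuous and higher-order):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Scaled forward algorithm for an ordinary (one-dimensional) HMM.
    pi: initial state distribution (S,); A: transition matrix (S, S)
    with A[i, j] = P(j | i); B: emission matrix (S, O); obs: sequence
    of observation indices.  Returns log P(obs)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale to avoid underflow
    return loglik

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
print(forward(pi, A, B, [0, 1, 1, 0]))  # log-probability, always negative
```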

  4. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    Science.gov (United States)

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  5. Two-dimensional unsteady lift problems in supersonic flight

    Science.gov (United States)

    Heaslet, Max A; Lomax, Harvard

    1949-01-01

    The variation of pressure distribution is calculated for a two-dimensional supersonic airfoil either experiencing a sudden angle-of-attack change or entering a sharp-edge gust. From these pressure distributions the indicial lift functions applicable to unsteady lift problems are determined for two cases. Results are presented which permit the determination of maximum increment in lift coefficient attained by an unrestrained airfoil during its flight through a gust. As an application of these results, the minimum altitude for safe flight through a specific gust is calculated for a particular supersonic wing of given strength and wing loading.

  6. Two-Dimensional Homogeneous Fermi Gases

    Science.gov (United States)

    Hueck, Klaus; Luick, Niclas; Sobirey, Lennart; Siegl, Jonas; Lompe, Thomas; Moritz, Henning

    2018-02-01

    We report on the experimental realization of homogeneous two-dimensional (2D) Fermi gases trapped in a box potential. In contrast to harmonically trapped gases, these homogeneous 2D systems are ideally suited to probe local as well as nonlocal properties of strongly interacting many-body systems. As a first benchmark experiment, we use a local probe to measure the density of a noninteracting 2D Fermi gas as a function of the chemical potential and find excellent agreement with the corresponding equation of state. We then perform matter wave focusing to extract the momentum distribution of the system and directly observe Pauli blocking in a near unity occupation of momentum states. Finally, we measure the momentum distribution of an interacting homogeneous 2D gas in the crossover between attractively interacting fermions and bosonic dimers.

  7. Analysis of biopsy outcome after three-dimensional conformal radiation therapy of prostate cancer using dose-distribution variables and tumor control probability models

    International Nuclear Information System (INIS)

    Levegruen, Sabine; Jackson, Andrew; Zelefsky, Michael J.; Venkatraman, Ennapadam S.; Skwarchuk, Mark W.; Schlegel, Wolfgang; Fuks, Zvi; Leibel, Steven A.; Ling, C. Clifton

    2000-01-01

    Purpose: To investigate tumor control following three-dimensional conformal radiation therapy (3D-CRT) of prostate cancer and to identify dose-distribution variables that correlate with local control assessed through posttreatment prostate biopsies. Methods and Material: Data from 132 patients, treated at Memorial Sloan-Kettering Cancer Center (MSKCC), who had a prostate biopsy 2.5 years or more after 3D-CRT for T1c-T3 prostate cancer with prescription doses of 64.8-81 Gy were analyzed. Variables derived from the dose distribution in the PTV included: minimum dose (Dmin), maximum dose (Dmax), mean dose (Dmean), dose to n% of the PTV (Dn), where n = 1%, ..., 99%. The concept of the equivalent uniform dose (EUD) was evaluated for different values of the surviving fraction at 2 Gy (SF2). Four tumor control probability (TCP) models (one phenomenologic model using a logistic function and three Poisson cell kill models) were investigated using two sets of input parameters, one for low and one for high T-stage tumors. Application of both sets to all patients was also investigated. In addition, several tumor-related prognostic variables were examined (including T-stage, Gleason score). Univariate and multivariate logistic regression analyses were performed. The ability of the logistic regression models (univariate and multivariate) to predict the biopsy result correctly was tested by performing cross-validation analyses and evaluating the results in terms of receiver operating characteristic (ROC) curves. Results: In univariate analysis, prescription dose (Dprescr), Dmax, Dmean, and dose to n% of the PTV with n of 70% or less correlate significantly with outcome. EUD correlates significantly with outcome for SF2 of 0.4 or more, but not for lower SF2 values. Using either of the two input parameter sets, all TCP models correlate with outcome. The predictive power of variables dominated by the coldest part of the dose distribution, such as EUD computed with low SF2, is limited because the low-dose region may not coincide with the tumor location.
Instead, for MSKCC prostate cancer patients with their

  8. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  9. Two-dimensional void reconstruction by neutron transmission

    International Nuclear Information System (INIS)

    Zakaib, G.D.; Harms, A.A.; Vlachopoulos, J.

    1978-01-01

    Contemporary algebraic reconstruction methods are utilized in investigating the two-dimensional void distribution in a water analog from neutron transmission measurements. The ultimate aim is to apply these techniques to the determination of time-averaged void distributions in two-phase flow systems, as well as to potential use in neutron radiography. Initially, projection data were obtained from a digitized model of a hypothetical two-phase representation and later from neutron beam traverses across a voided methacrylate plastic model. From 10 to 15 views were incorporated, and decoupling of overlapped measurements was utilized to afford greater resolution. In general, the additive Algebraic Reconstruction Technique yielded the best reconstructions, with others showing promise for noisy data. The results indicate the need for some further development of the method in interpreting real data.
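The additive Algebraic Reconstruction Technique mentioned above can be sketched as a Kaczmarz-style row-action update; the tiny 2x2 phantom and four-ray geometry below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=200, relax=1.0):
    """Additive ART: cycle over rays, correcting the image by the
    back-projected residual of one ray at a time (Kaczmarz update)."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy 2x2 "void fraction" image probed by 4 rays (2 rows + 2 columns).
true_image = np.array([0.0, 1.0, 0.5, 0.0])
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
b = A @ true_image            # simulated transmission projections
x = art_reconstruct(A, b)     # reproduces the measured projections
```

For a consistent system the iteration converges to an image whose projections match the data, even when (as here) the rays do not determine the image uniquely.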

  10. Numerical Resolution of N-dimensional Fokker-Planck stochastic equations; Resolucion Numerica de Ecuaciones Estocasticas de tipo Fokker-Planck en Varias Dimensiones

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Olivares, R A; Munoz Roldan, A

    1992-07-01

    This document describes the use of a library of programs able to solve stochastic Fokker-Planck equations in an N-dimensional space. The input data are essentially: (i) the initial distribution of the stochastic variable, (ii) the drift and fluctuation coefficients as a function of the state (which can be obtained from the transition probabilities between neighboring states) and (iii) some parameters controlling the run. The latest version of the library accepts sources and sinks defined in the state space. The output is the temporal evolution of the probability distribution in the space defined by an N-dimensional grid. Some applications and readings in Synergetics, self-organization, transport phenomena, ecology and other fields are suggested. If the probability distribution is interpreted as a distribution of particles, then the codes can be used to solve the N-dimensional advection-diffusion problem. (Author) 16 refs.
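A one-dimensional instance of such a solver can be sketched with explicit finite differences; the Ornstein-Uhlenbeck drift and unit diffusion below are illustrative choices, not the library's actual interface.

```python
import numpy as np

# Explicit finite-difference sketch of a 1D Fokker-Planck equation
#   dP/dt = -d(A(x) P)/dx + 0.5 d^2(B(x) P)/dx^2
# with hypothetical coefficients A(x) = -x, B(x) = 1 (Ornstein-Uhlenbeck),
# whose stationary density is a Gaussian with variance 0.5.
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
dt = 0.2 * dx**2               # small step for explicit stability
P = np.exp(-(x - 2.0)**2 / 0.1)
P /= P.sum() * dx              # normalized initial distribution

A = -x                         # drift coefficient
B = np.ones_like(x)            # fluctuation (diffusion) coefficient

for _ in range(20000):
    flux = A * P
    drift = -(np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    diff = 0.5 * (np.roll(B * P, -1) - 2 * B * P + np.roll(B * P, 1)) / dx**2
    P = P + dt * (drift + diff)
    P[0] = P[-1] = 0.0         # boundaries far from the bulk of the density

P /= P.sum() * dx              # re-normalize the evolved distribution
```

After relaxation the numerical distribution should approach the known stationary Gaussian, which gives a quick correctness check for this class of solver.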

  11. Feynman quasi probability distribution for spin-(1/2), and its generalizations

    International Nuclear Information System (INIS)

    Colucci, M.

    1999-01-01

    Feynman's paper Negative probability is examined; in it, after a discussion of the possibility of attributing a real physical meaning to quasi probability distributions, he introduces a new kind of distribution for spin-(1/2), with a possible method of generalization to systems with an arbitrary number of states. The principal aim of this article is to shed light upon the method of construction of these distributions, taking into consideration their application to some experiments, and discussing their positive and negative aspects

  12. Influence of dose distribution homogeneity on the tumor control probability in heavy-ion radiotherapy

    International Nuclear Information System (INIS)

    Wen Xiaoqiong; Li Qiang; Zhou Guangming; Li Wenjian; Wei Zengquan

    2001-01-01

    In order to estimate the influence of a non-uniform dose distribution on the clinical treatment result, the influence of dose distribution homogeneity on the tumor control probability was investigated. Based on a formula deduced previously for the survival fraction of cells irradiated by a non-uniform heavy-ion irradiation field, and on the theory of tumor control probability, the tumor control probability was calculated for a tumor model exposed to dose distributions of different homogeneity. The results show that the tumor control probability for the same total dose decreases as the dose distribution homogeneity worsens. In clinical treatment, the dose distribution homogeneity should be better than 95%
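The effect can be illustrated with a simple Poisson cell-kill TCP model; the clonogen number, SF2 value, and voxel doses below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def tcp_poisson(doses, clonogens_per_voxel, sf2):
    """Poisson cell-kill TCP with an SF2 parameterization: surviving
    clonogens per voxel n = N * sf2**(D/2); TCP = exp(-sum n)."""
    surviving = clonogens_per_voxel * sf2 ** (np.asarray(doses) / 2.0)
    return np.exp(-surviving.sum())

N, sf2 = 1e7, 0.5                               # hypothetical values
uniform = np.full(10, 70.0)                     # 70 Gy in every voxel
cold_spot = np.array([50.0] + [72.2222222] * 9) # same mean dose, one cold voxel

tcp_u = tcp_poisson(uniform, N, sf2)
tcp_n = tcp_poisson(cold_spot, N, sf2)
# the inhomogeneous plan with the same mean dose controls the tumor less often
```

Even though both plans deliver the same mean dose, the single cold voxel dominates the surviving-clonogen sum and pulls the TCP down, which is the qualitative point of the abstract.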

  13. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The resulting form is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere potentials at high density. The Gaussian form for P(F) is still accurate at lower densities (though not too low) for the two potentials, but with a smaller value of the constant A than that predicted by the DFT theory.
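If F is taken as a three-dimensional force vector with independent Gaussian components (an assumption for this sketch; the abstract does not state the dimensionality), the Gaussian form P(F) ∝ exp(-AF²) implies a Maxwell-type magnitude distribution W(F) = 4πF²(A/π)^(3/2)exp(-AF²), which can be checked numerically:

```python
import numpy as np

A = 2.0                                 # illustrative value of the constant A
F = np.linspace(0.0, 5.0, 100001)       # force magnitudes
W = 4 * np.pi * F**2 * (A / np.pi) ** 1.5 * np.exp(-A * F**2)

dF = F[1] - F[0]
norm = (W.sum() - 0.5 * (W[0] + W[-1])) * dF   # trapezoidal integral
mode = F[np.argmax(W)]                          # analytic mode is 1/sqrt(A)
```

The magnitude distribution integrates to one and peaks at F = 1/sqrt(A), tying the single constant A directly to the most probable net force.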

  14. The limiting conditional probability distribution in a stochastic model of T cell repertoire maintenance.

    Science.gov (United States)

    Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen

    2010-04-01

    The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed. (c) 2009 Elsevier Inc. All rights reserved.
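A minimal sketch of the LCD idea for a birth and death process with an absorbing origin: for a subcritical linear birth-death process (birth rate b·n, death rate d·n, b < d), the LCD is known to be geometric with ratio b/d, so the conditional mean tends to d/(d - b). The rates, horizon, and sample size below are illustrative choices, not from the paper.

```python
import random

def simulate(n0, b, d, t_end, rng):
    """Gillespie simulation of a linear birth-death process; returns the
    population at t_end, or 0 if extinction (absorption) occurred first."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate((b + d) * n)
        if t > t_end:
            return n
        n += 1 if rng.random() < b / (b + d) else -1
    return 0

rng = random.Random(1)
b, d, t_end = 0.5, 1.0, 10.0
survivors = [m for _ in range(60000)
             if (m := simulate(5, b, d, t_end, rng)) > 0]
cond_mean = sum(survivors) / len(survivors)   # estimates d/(d - b) = 2
```

Conditioning the late-time population on non-extinction recovers the quasi-stationary (Yaglom) behaviour even though every individual trajectory is eventually absorbed.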

  15. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)
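A numerical illustration of the DFP idea (the environment probabilities and weights are hypothetical): when the failure probability p varies across environments, two components sharing an environment fail together with probability E[p²], which exceeds the independence value (E[p])² whenever p actually varies; the surplus is the dependent-failure contribution.

```python
# Hypothetical per-environment failure probabilities and their weights.
environments = [0.001, 0.01, 0.1]
weights = [0.7, 0.25, 0.05]

mean_p = sum(w * p for w, p in zip(weights, environments))        # E[p]
both_fail = sum(w * p * p for w, p in zip(weights, environments)) # E[p^2]
independent = mean_p ** 2                                         # (E[p])^2
# both_fail / independent > 1 quantifies the common-cause coupling
```

For these illustrative numbers the joint failure probability is several times the naive independent estimate, which is exactly the effect the DFP framework is built to capture.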

  16. Spatial distribution of ozone density in pulsed corona discharges observed by two-dimensional laser absorption method

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Ryo; Oda, Tetsuji [Department of Electrical Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 (Japan)]

    2004-03-07

    The spatial distribution of ozone density is measured in pulsed corona discharges with a 40 μm spatial resolution using a two-dimensional laser absorption method. Discharge occurs in a 13 mm point-to-plane gap in dry air with a pulse duration of 100 ns. The result shows that the ozone density increases for about 100 μs after the discharge pulse. The rate coefficient of the ozone-producing reaction, O + O₂ + M → O₃ + M, is estimated to be 3.5 × 10⁻³⁴ cm⁶ s⁻¹. It is observed that ozone is mostly distributed in the secondary-streamer channel. This suggests that most of the ozone is produced by the secondary streamer, not the primary streamer. After the discharge pulse, ozone diffuses into the background from the secondary-streamer channel. The diffusion coefficient of ozone is estimated to be approximately 0.1 to 0.2 cm² s⁻¹.

  17. Spatial distribution of ozone density in pulsed corona discharges observed by two-dimensional laser absorption method

    International Nuclear Information System (INIS)

    Ono, Ryo; Oda, Tetsuji

    2004-01-01

    The spatial distribution of ozone density is measured in pulsed corona discharges with a 40 μm spatial resolution using a two-dimensional laser absorption method. Discharge occurs in a 13 mm point-to-plane gap in dry air with a pulse duration of 100 ns. The result shows that the ozone density increases for about 100 μs after the discharge pulse. The rate coefficient of the ozone-producing reaction, O + O₂ + M → O₃ + M, is estimated to be 3.5 × 10⁻³⁴ cm⁶ s⁻¹. It is observed that ozone is mostly distributed in the secondary-streamer channel. This suggests that most of the ozone is produced by the secondary streamer, not the primary streamer. After the discharge pulse, ozone diffuses into the background from the secondary-streamer channel. The diffusion coefficient of ozone is estimated to be approximately 0.1 to 0.2 cm² s⁻¹.
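A back-of-the-envelope check of the quoted kinetics (the air number densities are standard textbook values, not from the abstract): with the three-body rate coefficient above, the O-atom conversion time 1/(k[O₂][M]) in atmospheric dry air comes out at a few tens of microseconds, consistent with ozone growth ending roughly 100 μs after the pulse.

```python
k = 3.5e-34        # cm^6 s^-1, rate coefficient from the abstract
M = 2.5e19         # cm^-3, total number density of air at ~1 atm (assumed)
O2 = 0.21 * M      # cm^-3, oxygen fraction of dry air (assumed)

# Characteristic time for O + O2 + M -> O3 + M to consume the O atoms.
tau = 1.0 / (k * O2 * M)   # seconds
```

With these inputs tau is on the order of 2 × 10⁻⁵ s, so after a few time constants (about 100 μs) ozone production is essentially complete.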

  18. VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Smirnov Vladimir Alexandrovich

    2012-10-01

    The article deals with the probability analysis for a vibration isolation system of high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. The external sources of low-frequency vibrations may include the natural city background or internal low-frequency sources inside buildings (pedestrian activity, HVAC). Taking the Gauss distribution into account, the author estimates the probability that the relative displacement of the isolated mass remains below the vibration criteria. The problem is solved in the three-dimensional space spanned by the system parameters, including damping and natural frequency. From this probability distribution, the chance of exceeding the vibration criteria for a vibration isolation system is evaluated. Optimal system parameters, damping and natural frequency, are then derived, such that the possibility of exceeding the vibration criteria VC-E and VC-D is less than 0.04.
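The exceedance probability under the Gaussian assumption can be sketched with the complementary error function; the RMS displacement and criterion amplitude below are hypothetical numbers, not the article's data.

```python
import math

# For a zero-mean Gaussian relative displacement with RMS value sigma, the
# probability of exceeding a criterion amplitude c in magnitude is
#   P(|x| > c) = erfc(c / (sigma * sqrt(2))).
def p_exceed(c, sigma):
    return math.erfc(c / (sigma * math.sqrt(2.0)))

sigma = 0.12   # hypothetical RMS relative displacement, micrometres
c = 0.25       # hypothetical vibration-criterion amplitude, micrometres
p = p_exceed(c, sigma)   # a few percent for these illustrative numbers
```

Sweeping damping and natural frequency changes sigma, so the same formula lets one search the parameter space for designs keeping the exceedance chance below a target such as 0.04.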

  19. Excluding joint probabilities from quantum theory

    Science.gov (United States)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after the Born's probability for a single observable. Instead, various definitions were suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert space with dimension larger than two. If measurement contexts are included into the definition, joint probabilities are not excluded anymore, but they are still constrained by imprecise probabilities.

  20. Two-dimensional characterization of atmospheric profile retrievals from limb sounding observations

    International Nuclear Information System (INIS)

    Worden, J.R.; Bowman, K.W.; Jones, D.B.

    2004-01-01

    Limb sounders measure atmospheric radiation that is dependent on atmospheric temperature and constituents that have a radial and angular distribution in Earth-centered coordinates. In order to evaluate the sensitivity of a limb retrieval to radial and angular distributions of trace gas concentrations, we perform and characterize one-dimensional (vertical) and two-dimensional (radial and angular) atmospheric profile retrievals. Our simulated atmosphere for these retrievals is a distribution of carbon monoxide (CO), which represents a plume off the coast of south-east Asia. Both the one-dimensional (1D) and two-dimensional (2D) limb retrievals are characterized by evaluating their averaging kernels and error covariances on a radial and angular grid that spans the plume. We apply this 2D characterization of a limb retrieval to a comparison of the 2D retrieval with the 1D (vertical) retrieval. By characterizing a limb retrieval in two dimensions, the location of the air mass where the retrievals are most sensitive can be determined. For this test case the retrievals are most sensitive to the CO concentrations about 2 degrees of latitude in front of the tangent point locations. We find the information content for the 2D retrieval is an order of magnitude larger, and the degrees of freedom are about a factor of two larger, than those of the 1D retrieval, primarily because the 2D retrieval can estimate angular distributions of CO concentrations. This 2D characterization allows the radial and angular resolution as well as the degrees of freedom and information content to be computed for these limb retrievals. We also use the 2D averaging kernel to develop a strategy for validation of a limb retrieval with an in situ measurement.

  1. Superthermal photon bunching in terms of simple probability distributions

    Science.gov (United States)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

    We analyze the second-order photon autocorrelation function g(2 ) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g(2 )(0 ) >2 ]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.
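The proposed thermal-plus-lasinglike mixture can be checked directly from the photon-number distribution; the means and mixing weight below are illustrative choices, not the paper's fitted values.

```python
import math
import numpy as np

def g2(p, n):
    """Second-order autocorrelation g2(0) = <n(n-1)> / <n>^2."""
    mean = (p * n).sum()
    return (p * n * (n - 1)).sum() / mean**2

n = np.arange(400)
nt, nl, w = 10.0, 1.0, 0.5        # thermal mean, lasing mean, thermal weight
thermal = (nt / (1 + nt)) ** n / (1 + nt)           # Bose-Einstein statistics
lasing = np.array([math.exp(k * math.log(nl) - nl - math.lgamma(k + 1))
                   for k in n])                     # Poissonian statistics
mix = w * thermal + (1 - w) * lasing
# a thermal state alone gives g2 = 2; this mixture is superthermal (g2 > 2)
```

The mixture's g2(0) follows in closed form from <n(n-1)> = 2w·nt² + (1-w)·nl² and <n> = w·nt + (1-w)·nl, so the numerical value can be cross-checked analytically.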

  2. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    Energy Technology Data Exchange (ETDEWEB)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B [Laboratorio de Ingenieria de Rehabilitacion e Investigaciones Neuromusculares y Sensoriales, Facultad de Ingenieria, UNER, Oro Verde (Argentina)]

    2007-11-15

    The surface electromyographic signal is a stochastic signal that has been modeled as a Gaussian process, with a zero mean. It has been experimentally proved that this probability distribution can be adjusted with less error to a Laplacian type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In the case of subjects with lesions to the superior motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function would be the one that best adjusts to the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed.

  3. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    International Nuclear Information System (INIS)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B

    2007-01-01

    The surface electromyographic signal is a stochastic signal that has been modeled as a Gaussian process, with a zero mean. It has been experimentally proved that this probability distribution can be adjusted with less error to a Laplacian type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In the case of subjects with lesions to the superior motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function would be the one that best adjusts to the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed

  4. Probability as a conceptual hurdle to understanding one-dimensional quantum scattering and tunnelling

    International Nuclear Information System (INIS)

    Domert, Daniel; Linder, Cedric; Ingerman, Ake

    2005-01-01

    This paper draws on part of a larger project looking at university students' learning difficulties associated with quantum mechanics. Here an unexpected and interesting aspect was brought to the fore while students were discussing a computer simulation of one-dimensional quantum scattering and tunnelling. The most dominant conceptual hurdle that emerged in the students' explanations was centred on the notion of probability. To explore this further, categories of description of the variation in the understanding of probability were constituted. The analysis reported is done in terms of the various facets of probability encountered in the simulation and characterizes the dynamics of this conceptual hurdle to an appropriate understanding of the scattering and tunnelling process. Pedagogical implications are discussed

  5. Generalization of Poisson distribution for the case of changing probability of consequential events

    International Nuclear Information System (INIS)

    Kushnirenko, E.

    1995-01-01

    A generalization of the Poisson distribution to the case of changing probabilities of the consequential events is presented. It is shown that the classical Poisson distribution is the special case of this generalized distribution in which the probabilities of the consequential events are constant. Using the generalized Poisson distribution makes it possible, in some cases, to obtain an analytical result instead of performing a Monte Carlo calculation

  6. Study on two-dimensional distribution of X-ray image based on improved Elman algorithm

    International Nuclear Information System (INIS)

    Wang, Fang; Wang, Ming-Yuan; Tian, Feng-Shuo; Liu, Yu-Fang; Li, Lei; Zhao, Jing

    2015-01-01

    The principle of an X-ray detector that can simultaneously measure the exposure rate and the 2D (two-dimensional) distribution is described. A commercially available CMOS image sensor has been adopted as the key part to receive X-rays without any scintillator. The correlation between the pixel value (PV) and the absorbed exposure rate of X-rays is studied using the improved Elman neural network. Comparing the optimal adjustment process of the BP (Back Propagation) neural network and the improved Elman neural network, the neural network parameters are selected based on the fitting curve and the error curve. Experiments using practical production data show that the proposed method achieves highly accurate predictions, to within 10⁻¹⁵, consistent with the anticipated value. It is thus shown that the exposure rate can be measured with the X-ray detector using the improved Elman algorithm, which offers fast convergence and a smooth error curve. - Highlights: • A method to measure X-ray radiation with low cost and miniaturization. • A general CMOS image sensor is used to detect X-rays. • The system can measure exposure rate and 2D distribution simultaneously. • The Elman algorithm is adopted to improve the precision of the radiation detector

  7. Numerical resolution of N-dimensional Fokker-Planck stochastic equations

    International Nuclear Information System (INIS)

    Garcia-Olivares, A.; Munoz, A.

    1992-01-01

    This document describes the use of a library of programs able to solve stochastic Fokker-Planck equations in an N-dimensional space. The input data are essentially: (i) the initial distribution of the stochastic variable, (ii) the drift and fluctuation coefficients as a function of the state (which can be obtained from the transition probabilities between neighboring states) and (iii) some parameters controlling the run. The latest version of the library accepts sources and sinks defined in the state space. The output is the temporal evolution of the probability distribution in the space defined by an N-dimensional grid. Some applications and readings in Synergetics, self-organization, transport phenomena, ecology and other fields are suggested. If the probability distribution is interpreted as a distribution of particles, then the codes can be used to solve the N-dimensional advection-diffusion problem. (author) 21 fig. 16 ref

  8. Numerical Resolution of N-dimensional Fokker-Planck stochastic equations

    International Nuclear Information System (INIS)

    Garcia-Olivares, R. A.; Munoz Roldan, A.

    1992-01-01

    This document describes the use of a library of programs able to solve stochastic Fokker-Planck equations in an N-dimensional space. The input data are essentially: (i) the initial distribution of the stochastic variable, (ii) the drift and fluctuation coefficients as a function of the state (which can be obtained from the transition probabilities between neighboring states) and (iii) some parameters controlling the run. The latest version of the library accepts sources and sinks defined in the state space. The output is the temporal evolution of the probability distribution in the space defined by an N-dimensional grid. Some applications and readings in Synergetics, self-organization, transport phenomena, ecology and other fields are suggested. If the probability distribution is interpreted as a distribution of particles, then the codes can be used to solve the N-dimensional advection-diffusion problem. (Author) 16 refs

  9. Family of probability distributions derived from maximal entropy principle with scale invariant restrictions.

    Science.gov (United States)

    Sonnino, Giorgio; Steinbrecher, György; Cardinali, Alessandro; Sonnino, Alberto; Tlidi, Mustapha

    2013-01-01

    Using statistical thermodynamics, we derive a general expression of the stationary probability distribution for thermodynamic systems driven out of equilibrium by several thermodynamic forces. The local equilibrium is defined by imposing the minimum entropy production and the maximum entropy principle under the scale invariance restrictions. The obtained probability distribution presents a singularity that has immediate physical interpretation in terms of the intermittency models. The derived reference probability distribution function is interpreted as time and ensemble average of the real physical one. A generic family of stochastic processes describing noise-driven intermittency, where the stationary density distribution coincides exactly with the one resulted from entropy maximization, is presented.

  10. Efficiency of the estimators of multivariate distribution parameters from the one-dimensional observed frequencies

    International Nuclear Information System (INIS)

    Chernov, N.I.; Kurbatov, V.S.; Ososkov, G.A.

    1988-01-01

    Parameter estimation for multivariate probability distributions is studied in experiments where data are presented as one-dimensional histograms. For this model, a statistic defined as a quadratic form of the observed frequencies, which has a limiting χ²-distribution, is proposed. The efficiency of the estimator minimizing the value of this statistic is proved within the class of all unbiased estimates obtained via minimization of quadratic forms of observed frequencies. The method was applied to the physical problem of analyzing the secondary pion energy distribution in the isobar model of pion-nucleon interactions with the production of an additional pion. Numerical experiments showed that the accuracy of estimation is twice that of conventional methods

  11. Comparison between two bivariate Poisson distributions through the ...

    African Journals Online (AJOL)

    These two models are specified by their probability mass functions. ... To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution that allows the correlation to be negative, zero, or positive.

  12. Study on probability distribution of fire scenarios in risk assessment to emergency evacuation

    International Nuclear Information System (INIS)

    Chu Guanquan; Wang Jinhui

    2012-01-01

    Event tree analysis (ETA) is a frequently used technique to analyze the probabilities of probable fire scenarios. The event probability is usually characterized by a definite value. It is not appropriate to use a definite value, as these estimates may be the result of poor-quality statistics and limited knowledge. Without addressing uncertainties, ETA will give imprecise results, and the credibility of the risk assessment will be undermined. This paper presents an approach to address event probability uncertainties and to analyze the probability distribution of a probable fire scenario. ETA is performed to construct probable fire scenarios. The activation time of every event is characterized as a stochastic variable by considering uncertainties in the fire growth rate and other input variables. To obtain the probability distribution of a probable fire scenario, a Markov chain approach is proposed in combination with ETA. To demonstrate the approach, a case study is presented.

  13. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Estimation of distribution algorithm (EDA is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals, obtained by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.

  14. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are obtained from the statistical information of the best individuals via a fast learning rule, which enhances the efficiency of the algorithm; an elitism strategy is used to maintain convergence. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID tuning and a genetic algorithm.

  15. Three-dimensional rail-current distribution near the armature of simple, square-bore, two-rail railguns

    International Nuclear Information System (INIS)

    Beno, J.H.

    1991-01-01

    In this paper, the vector potential is solved as a three-dimensional boundary-value problem for a conductor geometry consisting of square-bore railgun rails and a stationary armature. The conductors are infinitely conducting, and perfect contact is assumed between the rails and the armature. From the vector potential solution, the surface current distribution is inferred.

  16. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

    Huang Zhifu; Lin Bihong; Chen Jincan

    2009-01-01

    In order to overcome the limitations of the original expression of the probability distribution appearing in the literature on Incomplete Statistics, a new expression of the probability distribution is derived. The Lagrange multiplier β introduced here is proved to be identical to that introduced in the second and third choices for the internal-energy constraint in Tsallis' statistics, and to be equal to the physical inverse temperature. It is shown that the probability distribution described by the new expression is invariant under uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given, and the relationship between the new and the original expressions of the probability distribution is discussed.

  17. Collective motions of globally coupled oscillators and some probability distributions on circle

    Energy Technology Data Exchange (ETDEWEB)

    Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)

    2017-06-28

    In 2010, Kato and Jones described a new family of probability distributions on the circle, obtained as Möbius transformations of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between Directional Statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on the cell cycle. • We apply Directional Statistics to study the collective dynamics of coupled oscillators.

  18. Micromachined two dimensional resistor arrays for determination of gas parameters

    NARCIS (Netherlands)

    van Baar, J.J.J.; Verwey, Willem B.; Dijkstra, Mindert; Dijkstra, Marcel; Wiegerink, Remco J.; Lammerink, Theodorus S.J.; Krijnen, Gijsbertus J.M.; Elwenspoek, Michael Curt

    A resistive sensor array is presented for two-dimensional temperature distribution measurements in a micromachined flow channel. This allows simultaneous measurement of flow velocity and fluid parameters, like thermal conductivity, diffusion coefficient and viscosity. More general advantages of…

  19. Random phenomena fundamentals of probability and statistics for engineers

    CERN Document Server

    Ogunnaike, Babatunde A

    2009-01-01

    Prelude: Approach Philosophy; Four Basic Principles. I Foundations: Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach. Random Phenomena, Variability, and Uncertainty: Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework. II Probability: Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence. Random Variables and Distributions: Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions. Multidimensional Random Variables: Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables. Random Variable Transformations: Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations. Application Case Studies I: Probability: Mendel and Heredity; World War II Warship Tactical Response Under Attack. III Distributions: Ide…

  20. How Can Histograms Be Useful for Introducing Continuous Probability Distributions?

    Science.gov (United States)

    Derouet, Charlotte; Parzysz, Bernard

    2016-01-01

    The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…

  1. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.

    2017-09-07

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low-rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of the off-diagonal matrix blocks and m is the size of the small diagonal blocks in the matrix that are not well approximated by low-rank factorizations and are treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.
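The baseline that this scheme accelerates is plain Monte Carlo with a dense Cholesky factor of the covariance matrix, which is what costs O(n²) per sample. A small bivariate sketch of that baseline (not the hierarchical method itself) is shown below; the correlation value is arbitrary.

```python
import numpy as np

def mvn_prob_mc(cov, upper, n=200_000, seed=0):
    """Plain Monte Carlo estimate of P(X <= upper) for X ~ N(0, cov).
    The dense Cholesky factor makes each sample cost O(n^2) in dimension n,
    which is the cost the hierarchical low-rank scheme reduces."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)                 # dense factor: O(n^2) per sample
    z = rng.standard_normal((n, len(cov)))
    x = z @ L.T                                  # correlated normal samples
    return float(np.mean(np.all(x <= upper, axis=1)))

cov = np.array([[1.0, 0.5], [0.5, 1.0]])
p = mvn_prob_mc(cov, upper=np.array([0.0, 0.0]))
```

For two standard normals with correlation ρ, the exact orthant probability is 1/4 + arcsin(ρ)/(2π) ≈ 0.3333 at ρ = 0.5, which the estimate should reproduce to Monte Carlo accuracy.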

  2. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.; Keyes, David E.; Turkiyyah, George

    2017-01-01

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low-rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of the off-diagonal matrix blocks and m is the size of the small diagonal blocks in the matrix that are not well approximated by low-rank factorizations and are treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.

  3. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    Directory of Open Access Journals (Sweden)

    S. K. Morley

    2007-11-01

    Full Text Available We compare the probability distributions of substorm magnetic bay magnitudes from observations and from a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model, which scale the contributions to AL from the directly driven DP2 electrojet and the loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different from the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level) agreement are consistent with expectations based on results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.

  4. Two-slit experiment: quantum and classical probabilities

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2015-01-01

    Inter-relation between quantum and classical probability models is one of the most fundamental problems of quantum foundations. Nowadays this problem also plays an important role in quantum technologies, in quantum cryptography and the theory of quantum random generators. In this letter, we compare the viewpoint of Richard Feynman that the behavior of quantum particles cannot be described by classical probability theory with the viewpoint that the quantum–classical inter-relation is more complicated (cf., in particular, the tomographic model of quantum mechanics developed in detail by Vladimir Man'ko). As a basic example, we consider the two-slit experiment, which played a crucial role in quantum foundational debates at the beginning of quantum mechanics (QM). In particular, its analysis led Niels Bohr to the formulation of the principle of complementarity. First, we demonstrate that in complete accordance with Feynman's viewpoint, the probabilities for the two-slit experiment have a non-Kolmogorovian structure, since they violate one of the basic laws of classical probability theory, the law of total probability (the heart of the Bayesian analysis). However, we then show that these probabilities can be embedded in a natural way into the classical (Kolmogorov, 1933) probability model. To do this, one has to take into account the randomness of selection of different experimental contexts, the joint consideration of which led Feynman to a conclusion about the non-classicality of quantum probability. We compare this embedding of non-Kolmogorovian quantum probabilities into the Kolmogorov model with well-known embeddings of non-Euclidean geometries into Euclidean space (e.g., the Poincaré disk model for the Lobachevsky plane). (paper)
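The violation of the law of total probability in the two-slit setting can be checked numerically with equal-amplitude slit waves: the both-slits-open probability differs from the sum of the single-slit probabilities by an interference term. The relative phase below is arbitrary, chosen only for illustration.

```python
import numpy as np

# Two-slit amplitudes at one screen point, each slit contributing weight 1/2.
psi1 = np.exp(1j * 0.0) / np.sqrt(2)          # wave through slit 1
psi2 = np.exp(1j * np.pi / 3) / np.sqrt(2)    # wave through slit 2, relative phase pi/3

p_quantum = abs(psi1 + psi2) ** 2             # both slits open
p_classical = abs(psi1) ** 2 + abs(psi2) ** 2 # law of total probability prediction
interference = 2 * (psi1 * np.conj(psi2)).real
```

Here p_classical = 1.0 while p_quantum = 1.5: the discrepancy is exactly the interference term 2·Re(ψ₁ψ₂*), the non-Kolmogorovian correction discussed in the abstract.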

  5. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate a suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson type III (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson type III distribution are found to be the best-fitting distributions for the weekly and monthly maximum share returns, respectively, in the Malaysian stock market during the studied period.
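The block-maxima workflow the abstract describes can be sketched on synthetic data. This uses scipy's maximum-likelihood fit rather than the study's L-moments estimation, and toy heavy-tailed returns rather than Bursa Malaysia data; both are assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Toy heavy-tailed daily "returns"; 1000 trading weeks of 5 days each.
rng = np.random.default_rng(42)
daily = rng.standard_t(df=4, size=5 * 1000) * 0.01
weekly_max = daily.reshape(1000, 5).max(axis=1)   # weekly block maxima

# Fit a GEV to the block maxima (scipy uses maximum likelihood, not L-moments).
shape, loc, scale = stats.genextreme.fit(weekly_max)

# 99th-percentile weekly maximum implied by the fitted GEV
q99 = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
```

Fitting a GPD to threshold exceedances, as in the study's weekly result, follows the same pattern with `stats.genpareto`.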

  6. An axial calculation method for accurate two-dimensional PWR core simulation

    International Nuclear Information System (INIS)

    Grimm, P.

    1985-02-01

    An axial calculation method, which improves the agreement between the multiplication factors determined by two- and three-dimensional PWR neutronic calculations, is presented. The axial buckling is determined at each time point so as to reproduce the increase in leakage due to the flattening of the axial power distribution, and the effect of the axial variation of the fuel group constants on the reactivity is taken into account. The results of a test example show that the differences in k-eff and cycle length between two- and three-dimensional calculations, which are unsatisfactorily large if a constant buckling is used, become negligible if the results of the axial calculation are used in the two-dimensional core simulation. (Auth.)

  7. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, p_st) for stochastic uncertainty, a probability space (S_su, L_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, p_st) and (S_su, L_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  8. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, P_st) for stochastic uncertainty, a probability space (S_su, L_su, P_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, P_st) and (S_su, L_su, P_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems

  9. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
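The maximum-entropy step can be illustrated on a toy discrete case: given only a mean constraint, the entropy-maximizing distribution has Gibbs (exponential-family) form, and the Lagrange multiplier can be found by bisection. This is a generic illustration of the principle, not the paper's beta-distribution construction, and the support and target mean below are made up.

```python
import math

def maxent_mean(support, target_mean, tol=1e-10):
    """Maximum-entropy distribution on a finite support subject to a fixed
    mean: the solution has Gibbs form p_i proportional to exp(lam * x_i);
    lam is found by bisection on the implied mean, which is monotone in lam."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# entropy-maximizing distribution on {0,...,4} with mean constrained to 1.5
p = maxent_mean([0, 1, 2, 3, 4], target_mean=1.5)
```

With more constraints (e.g. fixed variance or bounds) the same idea yields other families; the beta distribution in the abstract arises from constraints appropriate to a bounded parameter.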

  10. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    The Monte Carlo method is commonly used to observe the overall distribution and to determine lower or upper bound values in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, 'Two-Step Tail Area Sampling', is developed; it assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling is done at points separated by large intervals, and second, sampling is done at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than the Monte Carlo method for the same number of calculations. The new method is applied to the DNBR (departure from nucleate boiling ratio) prediction problem in the design of pressurized light water reactors.
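A loose illustration of the two-step idea for a monotone model of a single input: a coarse quantile scan locates the cell where the model output crosses the tail threshold, and only that region is then sampled finely. This is a simplified reading, not the authors' exact procedure; the standard-normal input, the quantile grids, and the monotonicity assumption are all assumptions made for the sketch.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # inverse normal CDF by bisection (adequate for a sketch)
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def two_step_tail(model, threshold, coarse=20, fine=2000):
    """Two-step tail estimate for a monotone model of one standard-normal
    input: coarse scan to bracket the threshold crossing, fine scan only
    inside the bracketing cells."""
    # step 1: coarse scan at quantile midpoints
    qs = [(i + 0.5) / coarse for i in range(coarse)]
    first = next((i for i, q in enumerate(qs)
                  if model(norm_ppf(q)) > threshold), coarse)
    if first == coarse:
        return 0.0
    # cells entirely above the crossing contribute their full probability
    p_tail = 1.0 - (first + 1) / coarse
    # step 2: fine scan restricted to the two cells bracketing the crossing
    a, b = max(0, first - 1) / coarse, (first + 1) / coarse
    sub = [a + (b - a) * (j + 0.5) / fine for j in range(fine)]
    inside = sum(1 for q in sub if model(norm_ppf(q)) > threshold)
    return p_tail + (b - a) * inside / fine

# identity model: tail probability of a standard normal above z = 1.6449
p = two_step_tail(lambda x: x, threshold=1.6449)
```

Only coarse + fine model evaluations are spent overall, with the fine budget concentrated in the tail region rather than spread over the whole distribution.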

  11. Characterizing single-molecule FRET dynamics with probability distribution analysis.

    Science.gov (United States)

    Santoso, Yusdi; Torella, Joseph P; Kapanidis, Achillefs N

    2010-07-12

    Probability distribution analysis (PDA) is a recently developed statistical tool for predicting the shapes of single-molecule fluorescence resonance energy transfer (smFRET) histograms, which allows the identification of single or multiple static molecular species within a single histogram. We used a generalized PDA method to predict the shapes of FRET histograms for molecules interconverting dynamically between multiple states. This method is tested on a series of model systems, including both static DNA fragments and dynamic DNA hairpins. By fitting the shape of this expected distribution to experimental data, the timescale of hairpin conformational fluctuations can be recovered, in good agreement with earlier published results obtained using different techniques. This method is also applied to studying the conformational fluctuations in the unliganded Klenow fragment (KF) of Escherichia coli DNA polymerase I, which allows both confirmation of the consistency of a simple, two-state kinetic model with the observed smFRET distribution of unliganded KF and extraction of a millisecond fluctuation timescale, in good agreement with rates reported elsewhere. We expect this method to be useful in extracting rates from processes exhibiting dynamic FRET, and in hypothesis-testing models of conformational dynamics against experimental data.

  12. Scarred resonances and steady probability distribution in a chaotic microcavity

    International Nuclear Information System (INIS)

    Lee, Soo-Young; Rim, Sunghwan; Kim, Chil-Min; Ryu, Jung-Wan; Kwon, Tae-Yoon

    2005-01-01

    We investigate scarred resonances of a stadium-shaped chaotic microcavity. It is shown that two components with different chirality of the scarring pattern are slightly rotated in opposite ways from the underlying unstable periodic orbit when the incident angles of the scarring pattern are close to the critical angle for total internal reflection. In addition, the correspondence of the emission pattern with the scarring pattern disappears when the incident angles are much larger than the critical angle. The steady probability distribution gives a consistent explanation of these interesting phenomena and makes it possible to predict the emission pattern in the latter case

  13. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distributions rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)

  14. Lagrangian statistics in weakly forced two-dimensional turbulence.

    Science.gov (United States)

    Rivera, Michael K; Ecke, Robert E

    2016-01-01

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale ri. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.

  15. Exploring non-signalling polytopes with negative probability

    International Nuclear Information System (INIS)

    Oas, G; Barros, J Acacio de; Carvalhaes, C

    2014-01-01

    Bipartite and tripartite EPR–Bell type systems are examined via joint quasi-probability distributions where probabilities are permitted to be negative. It is shown that such distributions exist only when the no-signalling condition is satisfied. A characteristic measure, the probability mass, is introduced and, via its minimization, limits the number of quasi-distributions describing a given marginal probability distribution. The minimized probability mass is shown to be an alternative way to characterize non-local systems. Non-signalling polytopes for two to eight settings in the bipartite scenario are examined and compared to prior work. Examining perfect cloning of non-local systems within the tripartite scenario suggests defining two categories of signalling. It is seen that many properties of non-local systems can be efficiently described by quasi-probability theory. (paper)

  16. Probability Distribution and Deviation Information Fusion Driven Support Vector Regression Model and Its Application

    Directory of Open Access Journals (Sweden)

    Changhao Fan

    2017-01-01

    Full Text Available In modeling, only information from the deviation between the output of the support vector regression (SVR) model and the training sample is considered, whereas other prior information about the training sample, such as probability distribution information, is ignored. Probability distribution information describes the overall distribution of the sample data in a training sample that contains different degrees of noise and potential outliers, and it can help develop a high-accuracy model. To mine and use the probability distribution information of a training sample, a new support vector regression model that incorporates probability distribution information weights, PDISVR, is proposed. In the PDISVR model, the probability distribution of each sample is taken as its weight and is introduced into the error coefficient and slack variables of SVR. Thus, both the deviation and the probability distribution information of the training sample are used in the PDISVR model to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with three SVR-based methods. The results showed that PDISVR performs better than the three other methods.
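The weighting idea can be approximated with scikit-learn primitives: estimate each training point's probability density and pass it as a sample weight to SVR, so that likely points dominate the fit and isolated outliers are down-weighted. This is a sketch of the concept, not the authors' formulation (which modifies the error coefficient and slack variables directly); the bandwidth, C, and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KernelDensity

# Toy 1-D regression data with a few injected outliers.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 80))[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)
y[::17] += 2.0                                   # inject outliers

# Density of each (x, y) pair, used as its weight: outliers sit in
# low-density regions of the joint space and get small weights.
kde = KernelDensity(bandwidth=0.5).fit(np.c_[X.ravel(), y])
w = np.exp(kde.score_samples(np.c_[X.ravel(), y]))
w /= w.max()

plain = SVR(C=10.0).fit(X, y)                    # deviation information only
weighted = SVR(C=10.0).fit(X, y, sample_weight=w)  # deviation + density weights
```

The density-weighted fit is pulled less toward the injected outliers than the plain fit, which is the qualitative behavior the abstract claims for PDISVR.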

  17. Two-dimensional quantum repeaters

    Science.gov (United States)

    Wallnöfer, J.; Zwerger, M.; Muschik, C.; Sangouard, N.; Dür, W.

    2016-11-01

    The endeavor to develop quantum networks gave rise to a rapidly developing field with far-reaching applications such as secure communication and the realization of distributed computing tasks. This ultimately calls for the creation of flexible multiuser structures that allow for quantum communication between arbitrary pairs of parties in the network and facilitate also multiuser applications. To address this challenge, we propose a two-dimensional quantum repeater architecture to establish long-distance entanglement shared between multiple communication partners in the presence of channel noise and imperfect local control operations. The scheme is based on the creation of self-similar multiqubit entanglement structures at growing scale, where variants of entanglement swapping and multiparty entanglement purification are combined to create high-fidelity entangled states. We show how such networks can be implemented using trapped ions in cavities.

  18. Equivalence of two-dimensional gravities

    International Nuclear Information System (INIS)

    Mohammedi, N.

    1990-01-01

    The authors find the relationship between the Jackiw-Teitelboim model of two-dimensional gravity and the SL(2,R) induced gravity. These are shown to be related to a two-dimensional gauge theory obtained by dimensionally reducing the Chern-Simons action of the 2 + 1 dimensional gravity. The authors present an explicit solution to the equations of motion of the auxiliary field of the Jackiw-Teitelboim model in the light-cone gauge. A renormalization of the cosmological constant is also given

  19. Fast chemical reaction in two-dimensional Navier-Stokes flow: initial regime.

    Science.gov (United States)

    Ait-Chaalal, Farid; Bourqui, Michel S; Bartello, Peter

    2012-04-01

    This paper studies an infinitely fast bimolecular chemical reaction in a two-dimensional biperiodic Navier-Stokes flow. The reactants in stoichiometric quantities are initially segregated by infinite gradients. The focus is placed on the initial stage of the reaction characterized by a well-defined one-dimensional material contact line between the reactants. Particular attention is given to the effect of the diffusion κ of the reactants. This study is an idealized framework for isentropic mixing in the lower stratosphere and is motivated by the need to better understand the effect of resolution on stratospheric chemistry in climate-chemistry models. Adopting a Lagrangian straining theory approach, we relate theoretically the ensemble mean of the length of the contact line, of the gradients along it, and of the modulus of the time derivative of the space-average reactant concentrations (here called the chemical speed) to the joint probability density function of the finite-time Lyapunov exponent λ with two times τ and τ̃. The time 1/λ measures the stretching time scale of a Lagrangian parcel on a chaotic orbit up to a finite time t, while τ measures it in the recent past before t, and τ̃ in the early part of the trajectory. We show that the chemical speed scales like κ^(1/2) and that its time evolution is determined by rare large events in the finite-time Lyapunov exponent distribution. The case of smooth initial gradients is also discussed. The theoretical results are tested with an ensemble of direct numerical simulations (DNSs) using a pseudospectral model.

  20. Two-dimensional metamaterial optics

    International Nuclear Information System (INIS)

    Smolyaninov, I I

    2010-01-01

    While three-dimensional photonic metamaterials are difficult to fabricate, many new concepts and ideas in the metamaterial optics can be realized in two spatial dimensions using planar optics of surface plasmon polaritons. In this paper we review recent progress in this direction. Two-dimensional photonic crystals, hyperbolic metamaterials, and plasmonic focusing devices are demonstrated and used in novel microscopy and waveguiding schemes

  1. WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS

    Directory of Open Access Journals (Sweden)

    Mr. Vladimir A. Smagin

    2016-12-01

    Full Text Available The Wiener–Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with random variables concentrated on the axis of abscissas.

  2. Probability distribution of long-run indiscriminate felling of trees in ...

    African Journals Online (AJOL)

    The study was undertaken to determine the probability distribution of long-run indiscriminate felling of trees in the northern senatorial district of Adamawa State. Specifically, the study focused on examining the future direction of indiscriminate felling of trees as well as its equilibrium distribution. A multi-stage and simple random ...

  3. Measuring Robustness of Timetables at Stations using a Probability Distribution

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex

    Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness of the station's infrastructure layout and plan of operation. However, these three methods do not take the timetable at the station into consideration. Therefore, two methods are introduced in this paper, making it possible to estimate the robustness of different timetables at a station, or of different infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time…

  4. Anisotropic Defect-Mediated Melting of Two-Dimensional Colloidal Crystals

    Science.gov (United States)

    Eisenmann, C.; Gasser, U.; Keim, P.; Maret, G.

    2004-09-01

    The melting transition of anisotropic two-dimensional (2D) crystals is studied in a model system of superparamagnetic colloids. The anisotropy of the induced dipole-dipole interaction is varied by tilting the external magnetic field off the normal to the particle plane. By analyzing the time-dependent Lindemann parameter as well as translational and orientational order we observe a 2D smecticlike phase. The Kosterlitz-Thouless-Halperin-Nelson-Young scenario of isotropic melting is modified: dislocation pairs and dislocations appear with different probabilities depending on their orientation with respect to the in-plane field.

  5. On the Geometrical Characteristics of Three-Dimensional Wireless Ad Hoc Networks and Their Applications

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available In a wireless ad hoc network, messages are transmitted, received, and forwarded in a finite geometrical region and the transmission of messages is highly dependent on the locations of the nodes. Therefore the study of geometrical relationship between nodes in wireless ad hoc networks is of fundamental importance in the network architecture design and performance evaluation. However, most previous works concentrated on the networks deployed in the two-dimensional region or in the infinite three-dimensional space, while in many cases wireless ad hoc networks are deployed in the finite three-dimensional space. In this paper, we analyze the geometrical characteristics of the three-dimensional wireless ad hoc network in a finite space in the framework of random graph and deduce an expression to calculate the distance probability distribution between network nodes that are independently and uniformly distributed in a finite cuboid space. Based on the theoretical result, we present some meaningful results on the finite three-dimensional network performance, including the node degree and the max-flow capacity. Furthermore, we investigate some approximation properties of the distance probability distribution function derived in the paper.
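The record above derives a closed-form distance probability distribution for nodes placed independently and uniformly in a cuboid; the expression itself is not reproduced in the abstract. As a rough illustration, the same distribution can be estimated by Monte Carlo sampling (a sketch; the function name and parameters are assumptions, not from the paper):

```python
import numpy as np

def distance_pdf_mc(a, b, c, n_pairs=200_000, bins=100, rng=None):
    """Monte Carlo estimate of the PDF of the distance between two
    nodes placed independently and uniformly in an a x b x c cuboid."""
    rng = np.random.default_rng(rng)
    p = rng.uniform([0.0, 0.0, 0.0], [a, b, c], size=(n_pairs, 3))
    q = rng.uniform([0.0, 0.0, 0.0], [a, b, c], size=(n_pairs, 3))
    d = np.linalg.norm(p - q, axis=1)
    # density=True normalizes the histogram to integrate to one
    pdf, edges = np.histogram(d, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf

centers, pdf = distance_pdf_mc(1.0, 1.0, 1.0, rng=0)
# support is [0, sqrt(a^2 + b^2 + c^2)], i.e. [0, sqrt(3)] for the unit cube
```

Such an estimate is also a convenient cross-check for the approximation properties of the analytical distribution that the paper investigates.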

  6. A two-dimensional model with three regions for the reflooding study

    International Nuclear Information System (INIS)

    Motta, A.M.T.; Kinrys, S.; Roberty, N.C.; Carmo, E.G.D. do; Oliveira, L.F.S. de

    1982-01-01

    A two-dimensional semi-analytical model with three heat transfer regions is described for the calculation of the flood ratio, the length of the quenching front and the temperature distribution in the cladding. (E.G.) [pt

  7. A two-dimensional model with three regions for the reflooding study

    International Nuclear Information System (INIS)

    Motta, A.M.T.; Kinrys, S.; Roberty, N.C.; Carmo, E.G.D. do; Oliveira, L.F.S. de.

    1983-02-01

    A two-dimensional semi-analytical model with three heat transfer regions is described for the calculation of the flood ratio, the length of the quenching front and the temperature distribution in the cladding. (E.G.) [pt

  8. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n -> c in (0,∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.

  9. Field in field technique in two-dimensional planning for whole brain irradiation; Tecnica field in field em planejamentos bidimensionais para irradiacao de cerebro total

    Energy Technology Data Exchange (ETDEWEB)

    Castro, A.L.S.; Campos, T.P.R., E-mail: radioterapia.andre@gmail.com [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte (Brazil). Departamento de Engenharia Nuclear

    2016-11-01

    Radiotherapy is the clinical method most commonly used to treat brain metastases, the most frequent secondary tumors, arising mainly from breast cancer, lung cancer and melanoma as primary origins. The protocols often use high daily doses and, depending on the irradiation technique, there is a high probability of complications in healthy tissues. In order to minimize adverse effects, it is important to perform dosimetric analysis of three-dimensional radiotherapy planning through tomographic images or, for 2D simulations, to apply techniques that optimize the dose distribution by increasing its homogeneity. The study aimed to compare 2D and 3D conformal planning for whole brain irradiation in an equivalent individual situation and to evaluate these plans with the field in field technique applied. The methodology consisted of simulating a two-dimensional plan, reproducing it on a set of tomographic images, and comparing it with the conformal plan for two fields and for four fields (field in field). The results showed no significant difference between 2D and 3D planning for whole brain irradiation, and the field in field technique significantly improved the dose distribution in the brain volume compared with two fields for the proposed situation. In conclusion, the two-dimensional plan with the four fields described was viable for whole brain irradiation in the treatment of brain metastases in the proposed situation. (author)

  10. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
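The process described, a superposition of uncorrelated pulses with Poisson arrivals and exponentially distributed amplitudes, can be generated synthetically along the following lines (a sketch assuming one-sided exponential pulse shapes, a common choice in this model family; names and parameter values are illustrative):

```python
import numpy as np

def shot_noise(t_end=1000.0, dt=0.1, rate=0.5, tau=1.0, mean_amp=1.0, rng=None):
    """Superposition of uncorrelated one-sided exponential pulses with
    Poisson arrival times and exponentially distributed amplitudes."""
    rng = np.random.default_rng(rng)
    # Poisson process: Poisson-distributed count, uniform arrival times
    n_pulses = rng.poisson(rate * t_end)
    arrivals = rng.uniform(0.0, t_end, n_pulses)
    amps = rng.exponential(mean_amp, n_pulses)
    t = np.arange(0.0, t_end, dt)
    signal = np.zeros_like(t)
    for t0, a in zip(arrivals, amps):
        mask = t >= t0
        signal[mask] += a * np.exp(-(t[mask] - t0) / tau)
    return t, signal

t, x = shot_noise(rng=1)
# the stationary mean is approximately rate * tau * mean_amp
```

With exponential amplitudes the signal is positive definite, which is exactly the limitation the record discusses for fields such as electric potential.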

  11. Numerical Loading of a Maxwellian Probability Distribution Function

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    A renormalization procedure for the numerical loading of a Maxwellian probability distribution function (PDF) is formulated. The procedure, which involves the solution of three coupled nonlinear equations, yields a numerically loaded PDF with improved properties for higher velocity moments. This method is particularly useful for low-noise particle-in-cell simulations with electron dynamics
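The renormalization procedure itself (three coupled nonlinear equations) is not given in the abstract. A minimal sketch of the underlying task, loading a 1D Maxwellian and enforcing the low-order moments exactly, might look as follows (illustrative only; this simple rescaling is not the cited renormalization):

```python
import numpy as np

def load_maxwellian(n, vth=1.0, rng=None):
    """Draw n particle velocities from a 1D Maxwellian (zero drift,
    thermal speed vth), then rescale so the first two moments are
    exact. NOTE: this crude rescaling is not the renormalization of
    the cited paper, which also improves higher velocity moments."""
    rng = np.random.default_rng(rng)
    v = rng.normal(0.0, vth, n)
    # enforce exactly zero mean and exactly vth**2 variance
    return (v - v.mean()) / v.std() * vth

v = load_maxwellian(100_000, rng=2)
# mean and variance are exact by construction; the fourth moment
# approaches 3*vth**4 only statistically
```

The residual statistical error in the fourth and higher moments is precisely the kind of particle noise that motivates the improved loading for low-noise particle-in-cell simulations.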

  12. On the probability distribution of the stochastic saturation scale in QCD

    International Nuclear Information System (INIS)

    Marquet, C.; Soyez, G.; Xiao Bowen

    2006-01-01

    It was recently noticed that high-energy scattering processes in QCD have a stochastic nature. An event-by-event scattering amplitude is characterised by a saturation scale which is a random variable. The statistical ensemble of saturation scales formed with all the events is distributed according to a probability law whose cumulants have been recently computed. In this work, we obtain the probability distribution from the cumulants. We prove that it can be considered as Gaussian over a large domain that we specify and our results are confirmed by numerical simulations

  13. Two-dimensional quantum key distribution (QKD) protocol for increased key rate fiber-based quantum communications

    DEFF Research Database (Denmark)

    da Lio, Beatrice; Bacco, Davide; Ding, Yunhong

    2017-01-01

    We experimentally prove a novel two-dimensional QKD scheme, relying on differential phase-time shifting (DPTS) of strongly attenuated weak coherent pulses. We demonstrate QKD transmission up to 170 km of standard fiber, and even include a classical channel up to 90 km.

  14. Two-dimensional thermal modeling of power monolithic microwave integrated circuits (MMIC's)

    Science.gov (United States)

    Fan, Mark S.; Christou, Aris; Pecht, Michael G.

    1992-01-01

    Numerical simulations of the two-dimensional temperature distributions for a typical GaAs MMIC circuit are conducted, aiming at understanding the heat conduction process of the circuit chip and providing temperature information for device reliability analysis. The method used is to solve the two-dimensional heat conduction equation with a control-volume-based finite difference scheme. In particular, the effects of the power dissipation and the ambient temperature are examined, and the criterion for the worst operating environment is discussed in terms of the allowed highest device junction temperature.

  15. Calculation of magnetization curves and probability distribution for monoclinic and uniaxial systems

    International Nuclear Information System (INIS)

    Sobh, Hala A.; Aly, Samy H.; Yehia, Sherif

    2013-01-01

    We present the application of a simple classical statistical mechanics-based model to selected monoclinic and hexagonal model systems. In this model, we treat the magnetization as a classical vector whose angular orientation is dictated by the laws of equilibrium classical statistical mechanics. We calculate, for these anisotropic systems, the magnetization curves, energy landscapes and probability distributions for different sets of relevant parameters and magnetic fields of different strengths and directions. Our results demonstrate a correlation between the most probable orientation of the magnetization vector, the system's parameters, and the external magnetic field. Highlights: We calculate magnetization curves and the angular probability distribution of the magnetization. The magnetization curves are consistent with the probability results for the studied systems. Monoclinic and hexagonal systems behave differently due to their different anisotropies.
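For a uniaxial system, the approach in this record reduces to weighting each orientation of the classical magnetization vector by its Boltzmann factor. A sketch under an assumed energy form E(θ) = K sin²θ − μB cosθ, with the field along the easy axis (the paper treats more general anisotropies and field directions):

```python
import numpy as np

def magnetization_uniaxial(K, muB, kT, n=4001):
    """Thermal average <cos(theta)> for a classical moment with
    uniaxial anisotropy constant K in a field muB along the easy
    axis. Orientations weighted by sin(theta) * exp(-E/kT)."""
    theta = np.linspace(0.0, np.pi, n)
    energy = K * np.sin(theta)**2 - muB * np.cos(theta)
    # subtract min energy for numerical stability of the exponential
    weight = np.sin(theta) * np.exp(-(energy - energy.min()) / kT)
    return float(np.sum(np.cos(theta) * weight) / np.sum(weight))

# the moment aligns with the field as the field strength grows
m_weak = magnetization_uniaxial(K=1.0, muB=0.1, kT=0.5)
m_strong = magnetization_uniaxial(K=1.0, muB=5.0, kT=0.5)
```

The same weight function, before averaging, is the angular probability distribution whose most probable orientation the record correlates with the system parameters and the field.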

  16. Two-dimensional versus three-dimensional treatment planning of tangential breast irradiation

    International Nuclear Information System (INIS)

    Damen, E.M.F.; Bruinvis, I.A.D.; Mijnheer, B.J.

    1995-01-01

    Purpose: Full three-dimensional (3-D) treatment planning requires 3-D patient contours and density information, derived either from CT scanning or from other 3-D contouring methods. These contouring techniques are time consuming, and are often not available or cannot be used. Two-dimensional (2-D) treatment planning can be performed using only a few patient contours, made with much simpler techniques, in combination with simulator images for estimating the lung position. In order to investigate the need for full 3-D planning, we compared the performance of both a 2-D and a 3-D planning system in calculating absolute dose values and relative dose distributions in tangential breast irradiation. Methods: Two breast-shaped phantoms were used in this study. The first phantom consists of a polyethylene mould, filled with water and cork to mimic the lung. An ionization chamber can be inserted in the phantom at fixed positions. The second phantom is made of 25 transverse slices of polystyrene and cork, made with a computerized milling machine from CT information. In this phantom, films can be inserted in three sagittal planes. Both phantoms have been irradiated with two tangential 8 MV photon beams. The measured dose distribution has been compared with the dose distribution predicted by the two planning systems. Results: In the central plane, the 3-D planning system predicts the absolute dose with an accuracy of 0.5 - 4%. The dose at the isocentre of the beams agrees within 0.5% with the measured dose. The 2-D system predicts the dose with an accuracy of 0.9 - 3%. The dose calculated at the isocentre is 2.6% higher than the measured dose, because missing lateral scatter is not taken into account in this planning system. In off-axis planes, the calculated absolute dose agrees with the measured dose within 4% for the 2-D system and within 6% for the 3-D system. However, the relative dose distribution is predicted better by the 3-D planning system. Conclusions: This study

  17. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  18. p-adic probability prediction of correlations between particles in the two-slit and neutron interferometry experiments

    International Nuclear Information System (INIS)

    Khrennikov, A.

    1998-01-01

    The author starts from Feynman's idea to use negative probabilities to describe the two-slit experiment and other quantum interference experiments. Formally, by using negative probability distributions, the author can explain the results of the two-slit experiment on the basis of the pure corpuscular picture of quantum mechanics. However, negative probabilities are absurd objects in the framework of the standard Kolmogorov theory of probability. The author presents a large class of non-Kolmogorovean probability models in which negative probabilities are well defined on the frequency basis. These are models with probabilities belonging to the so-called field of p-adic numbers. However, these models are characterized by correlations between trials. Therefore, the author predicts correlations between particles in interference experiments. In fact, the predictions are similar to those of the so-called nonergodic interpretation of quantum mechanics, which was proposed by V. Buonomano. The author proposes concrete experiments (in particular, in the framework of neutron interferometry) to verify these predictions on the correlations.

  19. Field analysis of two-dimensional focusing grating

    OpenAIRE

    Borsboom, P.P.; Frankena, H.J.

    1995-01-01

    The method that we have developed [P-P. Borsboom, Ph.D. dissertation (Delft University of Technology, Delft, The Netherlands); P-P. Borsboom and H. J. Frankena, J. Opt. Soc. Am. A 12, 1134–1141 (1995)] is successfully applied to a two-dimensional focusing grating coupler. The field in the focal region has been determined for symmetrical chirped gratings consisting of as many as 124 corrugations. The intensity distribution in the focal region agrees well with the approximate predictions of geo...

  20. Acoustic transparency in two-dimensional sonic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Dehesa, Jose; Torrent, Daniel [Wave Phenomena Group, Department of Electronic Engineering, Polytechnic University of Valencia, C/ Camino de Vera s/n, E-46022 Valencia (Spain); Cai Liangwu [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS 66506 (United States)], E-mail: jsdehesa@upvnet.upv.es

    2009-01-15

    Acoustic transparency is studied in two-dimensional sonic crystals consisting of hexagonal distributions of cylinders with continuously varying properties. The transparency condition is achieved by selectively closing the acoustic bandgaps, which are governed by the structure factor of the cylindrical scatterers. It is shown here that cylindrical scatterers with the proposed continuously varying properties are physically realizable by using metafluids based on sonic crystals. The feasibility of this proposal is analyzed by a numerical experiment based on multiple scattering theory.

  1. Crustal geomagnetic field - Two-dimensional intermediate-wavelength spatial power spectra

    Science.gov (United States)

    Mcleod, M. G.

    1983-01-01

    Two-dimensional Fourier spatial power spectra of equivalent magnetization values are presented for a region that includes a large portion of the western United States. The magnetization values were determined by inversion of POGO satellite data, assuming a magnetic crust 40 km thick, and were located on an 11 x 10 array with 300 km grid spacing. The spectra appear to be in good agreement with values of the crustal geomagnetic field spatial power spectra given by McLeod and Coleman (1980) and with the crustal field model given by Serson and Hannaford (1957). The spectra show evidence of noise at low frequencies in the direction along the satellite orbital track (N-S), indicating that for this particular data set additional filtering would probably be desirable. These findings illustrate the value of two-dimensional spatial power spectra both for describing the geomagnetic field statistically and as a guide for diagnosing possible noise sources.

  2. Cooperation in two-dimensional mixed-games

    International Nuclear Information System (INIS)

    Amaral, Marco A; Silva, Jafferson K L da; Wardil, Lucas

    2015-01-01

    Evolutionary game theory is a common framework to study the evolution of cooperation, where it is usually assumed that the same game is played in all interactions. Here, we investigate a model where the game played by two individuals is uniformly drawn from a sample of two different games. Using the master equation approach we show that the random mixture of two games is equivalent to playing the average game when (i) the strategies are statistically independent of the game distribution and (ii) the transition rates are linear functions of the payoffs. We also use Monte Carlo simulations in a two-dimensional lattice and mean-field techniques to investigate the scenario when the two above conditions do not hold. We find that even outside of such conditions, several quantities characterizing the mixed-games are still the same as the ones obtained in the average game when the two games are not very different. (paper)

  3. New family of probability distributions with applications to Monte Carlo studies

    International Nuclear Information System (INIS)

    Johnson, M.E.; Tietjen, G.L.; Beckman, R.J.

    1980-01-01

    A new probability distribution is presented that offers considerable potential for providing stochastic inputs to Monte Carlo simulation studies. The distribution includes the exponential power family as a special case. An efficient computational strategy is proposed for random variate generation. An example for testing the hypothesis of unit variance illustrates the advantages of the proposed distribution

  4. Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces

    International Nuclear Information System (INIS)

    Vourdas, A.

    2014-01-01

    The orthocomplemented modular lattice of subspaces L[H(d)], of a quantum system with d-dimensional Hilbert space H(d), is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H₁, H₂), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H₁), P(H₂) to the subspaces H₁, H₂. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin-1/2 particles is valid for Kolmogorov probabilities, but it is not valid for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities

  5. Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Shabbir, Aqsa

    2016-07-07

    scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization (dimensionality reduction). Furthermore, two new classification schemes are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimum GD to the class centroids and PGC considers the shape of the class on the manifold by determining the minimum distance to the principal geodesic of each class. The methods are validated by their application to the classification and retrieval of colored texture images represented in the wavelet domain. Both methods prove to be computationally efficient, yield high accuracy and also clearly exhibit the adequacy of the GD and its superiority over the Euclidean distance, for comparing PDFs. The second main goal of the work targets ELM analysis at three fronts, using pattern recognition and probabilistic modeling: (i) We first concentrate on visualization of ELM characteristics by creating maps containing projections of multidimensional ELM data, as well as the corresponding probabilistic models. In particular, GD-based MDS is used for representing the complete distributions of the multidimensional data characterizing the operational space of ELMs onto two-dimensional maps. Clusters corresponding to type I and type III ELMs are identified and the maps enable tracking of trends in plasma parameters across the operational space. It is shown that the maps can also be used with reasonable accuracy for predicting the values of the plasma parameters at a certain point in the operational space. (ii) Our second application concerns fast, standardized and automated classification of ELM types. The presented classification schemes are aimed at complementing the phenomenological characterization using standardized methods that are less susceptible to subjective interpretation, while considerably reducing the effort of ELM experts in identifying ELM types. To this end, different classification

  6. Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    International Nuclear Information System (INIS)

    Shabbir, Aqsa

    2016-01-01

    scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization (dimensionality reduction). Furthermore, two new classification schemes are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimum GD to the class centroids and PGC considers the shape of the class on the manifold by determining the minimum distance to the principal geodesic of each class. The methods are validated by their application to the classification and retrieval of colored texture images represented in the wavelet domain. Both methods prove to be computationally efficient, yield high accuracy and also clearly exhibit the adequacy of the GD and its superiority over the Euclidean distance, for comparing PDFs. The second main goal of the work targets ELM analysis at three fronts, using pattern recognition and probabilistic modeling: (i) We first concentrate on visualization of ELM characteristics by creating maps containing projections of multidimensional ELM data, as well as the corresponding probabilistic models. In particular, GD-based MDS is used for representing the complete distributions of the multidimensional data characterizing the operational space of ELMs onto two-dimensional maps. Clusters corresponding to type I and type III ELMs are identified and the maps enable tracking of trends in plasma parameters across the operational space. It is shown that the maps can also be used with reasonable accuracy for predicting the values of the plasma parameters at a certain point in the operational space. (ii) Our second application concerns fast, standardized and automated classification of ELM types. The presented classification schemes are aimed at complementing the phenomenological characterization using standardized methods that are less susceptible to subjective interpretation, while considerably reducing the effort of ELM experts in identifying ELM types. To this end, different classification

  7. Universal Distribution of Centers and Saddles in Two-Dimensional Turbulence

    International Nuclear Information System (INIS)

    Rivera, Michael; Wu, Xiao-Lun; Yeung, Chuck

    2001-01-01

    The statistical properties of the local topology of two-dimensional turbulence are investigated using an electromagnetically forced soap film. The local topology of the incompressible 2D flow is characterized by the Jacobian determinant Λ(x,y) = (1/4)(ω² − σ²), where ω(x,y) is the local vorticity and σ(x,y) is the local strain rate. For turbulent flows driven by different external force configurations, P(Λ) is found to be a universal function when rescaled using the turbulent intensity. A simple model that agrees with the measured functional form of P(Λ) is constructed using the assumption that the stream function, ψ(x,y), is a Gaussian random field
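The paper's modeling assumption, that the stream function ψ(x,y) is a Gaussian random field, can be explored numerically by generating such a field on a periodic grid and evaluating Λ = (1/4)(ω² − σ²) with spectral derivatives (a sketch; the grid size, spectrum shape and function names are assumptions, not from the paper):

```python
import numpy as np

def okubo_weiss_from_gaussian_psi(n=128, k0=8.0, rng=None):
    """Sample a periodic Gaussian random stream function psi and
    return Lambda = (omega^2 - sigma^2)/4, with omega the vorticity
    and sigma the total strain rate of u = d(psi)/dy, v = -d(psi)/dx."""
    rng = np.random.default_rng(rng)
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers on a 2*pi box
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    # Gaussian random field with a smooth isotropic spectrum
    psi_hat = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) \
        * np.exp(-k2 / (2.0 * k0**2))
    psi_hat[0, 0] = 0.0                   # zero-mean field

    def deriv(field_hat, kk):             # spectral derivative, real part
        return np.real(np.fft.ifft2(1j * kk * field_hat))

    u, v = deriv(psi_hat, ky), -deriv(psi_hat, kx)
    ux, uy = deriv(np.fft.fft2(u), kx), deriv(np.fft.fft2(u), ky)
    vx, vy = deriv(np.fft.fft2(v), kx), deriv(np.fft.fft2(v), ky)
    omega = vx - uy                       # vorticity
    sigma2 = (ux - vy)**2 + (vx + uy)**2  # normal + shear strain squared
    return 0.25 * (omega**2 - sigma2)

lam = okubo_weiss_from_gaussian_psi(rng=3)
# centers (lam > 0) and saddles (lam < 0) both occur; the spatial mean
# of lam vanishes for a periodic incompressible flow
```

The histogram of the resulting Λ field is the kind of P(Λ) that the experiment compares against.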

  8. Moderator feedback effects in two-dimensional nodal methods for pressurized water reactor analysis

    International Nuclear Information System (INIS)

    Downar, T.J.

    1987-01-01

    A method was developed for incorporating moderator feedback effects in two-dimensional nodal codes used for pressurized water reactor (PWR) neutronic analysis. Equations for the assembly average quality and density are developed in terms of the assembly power calculated in two dimensions. The method is validated with a Westinghouse PWR using the Electric Power Research Institute code SIMULATE-E. Results show a several percent improvement is achieved in the two-dimensional power distribution prediction compared to methods without moderator feedback

  9. Mass distribution for the two-photon channel

    CERN Multimedia

    ATLAS, collaboration

    2012-01-01

    Mass distribution for the two-photon channel. The strongest evidence for this new particle comes from analysis of events containing two photons. The smooth dotted line traces the measured background from known processes. The solid line traces a statistical fit to the signal plus background. The new particle appears as the excess around 126.5 GeV. The full analysis concludes that the probability of such a peak is three chances in a million.

  10. Heat transfer of phase-change materials in two-dimensional cylindrical coordinates

    Science.gov (United States)

    Labdon, M. B.; Guceri, S. I.

    1981-01-01

    The two-dimensional phase-change problem is numerically solved in cylindrical coordinates (r and z) by utilizing two Taylor series expansions for the temperature distributions in the neighborhood of the interface location. These two expansions form two polynomials in the r and z directions. For regions sufficiently far from the interface, the temperature field equations are numerically solved in the usual way and the results are coupled with the polynomials. The main advantages of this efficient approach include the ability to accept arbitrarily time-dependent boundary conditions of all types and arbitrarily specified initial temperature distributions. A modified approach using a single Taylor series expansion in two variables is also suggested.

  11. High-efficiency one-dimensional atom localization via two parallel standing-wave fields

    International Nuclear Information System (INIS)

    Wang, Zhiping; Wu, Xuqiang; Lu, Liang; Yu, Benli

    2014-01-01

    We present a new scheme of high-efficiency one-dimensional (1D) atom localization via measurement of the upper state population or the probe absorption in a four-level N-type atomic system. By applying two classical standing-wave fields, the localization peak position and number, as well as the conditional position probability, can be easily controlled by the system parameters, and sub-half-wavelength atom localization is also observed. More importantly, the atom is detected with 100% probability within the subwavelength domain when the corresponding conditions are satisfied. The proposed scheme may open up a promising way to achieve high-precision and high-efficiency 1D atom localization. (paper)

  12. Evaluation of the probability distribution of intake from a single measurement on a personal air sampler

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1988-01-01

    An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^(-2/3) is used, where w is the expected number of particles collected. (author)
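The k-sum construction in this record, a Poisson-distributed number of lognormal contributions, is easy to explore by Monte Carlo (a sketch; the parameter names are illustrative and the paper's analytical k-sum density is not reproduced):

```python
import numpy as np

def intake_samples(n, mean_particles=5.0, mu=0.0, sigma=1.0, rng=None):
    """Monte Carlo sketch of the intake model: the number of inhaled
    particles is Poisson distributed, each particle's activity is
    lognormal, and the intake is the k-sum of the activities."""
    rng = np.random.default_rng(rng)
    k = rng.poisson(mean_particles, n)
    total = np.zeros(n)
    for i, ki in enumerate(k):
        if ki:  # k = 0 means no particles collected, zero intake
            total[i] = rng.lognormal(mu, sigma, ki).sum()
    return total

x = intake_samples(50_000, rng=4)
# mean intake = mean_particles * exp(mu + sigma**2 / 2) for a
# compound Poisson sum of lognormals
```

Histograms of such samples give an independent check on any closed-form k-sum density before it is fed into the Bayesian posterior calculation.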

  13. On probability-possibility transformations

    Science.gov (United States)

    Klir, George J.; Parviz, Behzad

    1992-01-01

    Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
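
    One widely used pair of such transformations is the ratio-scale mapping; the sketch below is illustrative only (the specific transformations compared by Klir and Parviz are not reproduced here).

```python
# Ratio-scale probability-possibility transformation (illustrative sketch).
def prob_to_poss(p):
    # The most probable element gets possibility 1; others scale by ratio.
    m = max(p)
    return [x / m for x in p]

def poss_to_prob(pi):
    # Inverse direction: renormalise the possibility profile to sum to 1.
    s = sum(pi)
    return [x / s for x in pi]
```

    Both directions preserve the ordering of elements, one of the consistency requirements typically imposed on such transformations.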

  14. Quantum key distribution session with 16-dimensional photonic states

    Science.gov (United States)

    Etcheverry, S.; Cañas, G.; Gómez, E. S.; Nogueira, W. A. T.; Saavedra, C.; Xavier, G. B.; Lima, G.

    2013-01-01

    The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties, that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations has been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD. PMID:23897033

  15. Probability distribution of dose rates in the body tissue as a function of the rhythm of Sr-90 administration and the age of animals

    International Nuclear Information System (INIS)

    Rasin, I.M.; Sarapul'tsev, I.A.

    1975-01-01

    The probability distribution of tissue radiation doses in the skeleton was studied in experiments on swine and dogs. When Sr-90 was introduced into the organism from the day of birth up to 90 days of age, the dose-rate probability distribution is characterized by one or, for adult animals, by two independent aggregates. Each of these aggregates follows the normal distribution law.

  16. The Marginal Distributions of a Crossing Time and Renewal Numbers Related with Two Poisson Processes are as Ph-Distributions

    Directory of Open Access Journals (Sweden)

    Mir G. H. Talpur

    2006-01-01

    Full Text Available In this paper we consider how to find the marginal distributions of the crossing time and renewal numbers related to two Poisson processes by using probability arguments. The obtained results show that the one-dimensional marginal distributions are PH-distributions of order N+1.

  17. On the two-dimensional Saigo-Maeda fractional calculus associated with the two-dimensional Aleph transform

    Directory of Open Access Journals (Sweden)

    Dinesh Kumar

    2013-11-01

    Full Text Available This paper deals with the study of two-dimensional Saigo-Maeda operators of Weyl type associated with the Aleph function defined in this paper. Two theorems on these operators are established. Some interesting results involving H-functions and generalized Mittag-Leffler functions are deduced from the derived results. A one-dimensional analogue of the derived results is also obtained.

  18. Path probability distribution of stochastic motion of non-dissipative systems: a classical analog of Feynman factor of path integral

    International Nuclear Information System (INIS)

    Lin, T.L.; Wang, R.; Bi, W.P.; El Kaabouchi, A.; Pujos, C.; Calvayrac, F.; Wang, Q.A.

    2013-01-01

    We investigate, by numerical simulation, the path probability of non-dissipative mechanical systems undergoing stochastic motion. The aim is to find the relationship between this probability and the usual mechanical action. The simulated model is a one-dimensional particle subject to a conservative force and Gaussian random displacements. The probability that a sample path between two fixed points is taken is computed as the number of particles moving along this path (an output of the simulation) divided by the total number of particles arriving at the final point. It is found that the path probability decays exponentially with increasing action of the sample paths, and that the decay rate increases with decreasing randomness. This result supports the existence of a classical analog of the Feynman factor in the path-integral formulation of quantum mechanics for Hamiltonian systems
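
    For the special case of a free particle (no conservative force), the relation between path probability and action can be checked exactly, since the log-probability of a discretised path under Gaussian steps is a linear function of its kinetic action. A sketch under these simplifying assumptions, with all parameter values illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, sigma, m = 0.01, 0.1, 1.0          # time step, step std, mass (illustrative)
n_steps, n_paths = 50, 2000

# Each row of dx is one discretised sample path (free particle, Gaussian steps)
dx = rng.normal(0.0, sigma, size=(n_paths, n_steps))

# Kinetic action of each path: A = sum over steps of 0.5*m*(dx/dt)**2 * dt
action = 0.5 * m * np.sum(dx ** 2, axis=1) / dt

# Log-probability of each path under the Gaussian step distribution
logp = -np.sum(dx ** 2, axis=1) / (2.0 * sigma ** 2)

# For a free particle, logp = -(dt / (m * sigma**2)) * action + const exactly,
# i.e. the path probability decays exponentially with increasing action.
corr = np.corrcoef(action, logp)[0, 1]
```

    The simulation in the paper adds a conservative force, for which the exponential decay with the full (kinetic minus potential) action is a numerical finding rather than an identity.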

  19. A two-dimensional analytical model of laminar flame in lycopodium dust particles

    Energy Technology Data Exchange (ETDEWEB)

    Rahbari, Alireza [Shahid Rajaee Teacher Training University, Tehran (Iran, Islamic Republic of); Shakibi, Ashkan [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Bidabadi, Mehdi [Combustion Research Laboratory, Narmak, Tehran (Iran, Islamic Republic of)

    2015-09-15

    A two-dimensional analytical model is presented to determine the flame speed and temperature distribution of micro-sized lycopodium dust particles. The model is based on the assumptions that the particle burning rate in the flame front is controlled by the process of oxygen diffusion and that the flame structure consists of preheat, reaction and post-flame zones. In the first step, the energy conservation equations for the fuel-lean condition are expressed in two dimensions; these differential equations are then solved using the required boundary conditions and by matching the temperature and heat flux at the interfacial boundaries. The obtained flame temperature and flame speed distributions for different particle diameters and equivalence ratios of the lean mixture are compared with the corresponding experimental data for lycopodium dust particles. It is shown that this two-dimensional model agrees better with the experimental results than the previous models.

  20. A two-dimensional analytical model of laminar flame in lycopodium dust particles

    International Nuclear Information System (INIS)

    Rahbari, Alireza; Shakibi, Ashkan; Bidabadi, Mehdi

    2015-01-01

    A two-dimensional analytical model is presented to determine the flame speed and temperature distribution of micro-sized lycopodium dust particles. The model is based on the assumptions that the particle burning rate in the flame front is controlled by the process of oxygen diffusion and that the flame structure consists of preheat, reaction and post-flame zones. In the first step, the energy conservation equations for the fuel-lean condition are expressed in two dimensions; these differential equations are then solved using the required boundary conditions and by matching the temperature and heat flux at the interfacial boundaries. The obtained flame temperature and flame speed distributions for different particle diameters and equivalence ratios of the lean mixture are compared with the corresponding experimental data for lycopodium dust particles. It is shown that this two-dimensional model agrees better with the experimental results than the previous models.

  1. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Given its wide applicability to industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  2. Introduction and application of non-stationary standardized precipitation index considering probability distribution function and return period

    Science.gov (United States)

    Park, Junehyeong; Sung, Jang Hyun; Lim, Yoon-Jin; Kang, Hyun-Suk

    2018-05-01

    The widely used meteorological drought index, the Standardized Precipitation Index (SPI), assumes stationarity, but recent changes in the climate have led to a need to review this hypothesis. In this study, a new non-stationary SPI is proposed that considers not only the modified probability distribution parameters but also the return period under the non-stationary process. The results were evaluated for two severe drought cases during the last 10 years in South Korea. The SPI under the non-stationary hypothesis indicated lower drought severity than the stationary SPI, even though these two past droughts were recognized as significantly severe. This may be because the variances of summer and autumn precipitation become larger over time, which makes the probability distribution wider than before. This implies that drought expressed by a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when deciding drought levels under a changing climate.
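
    For reference, a stationary SPI can be sketched with a rank-based (empirical) probability estimate mapped through the standard normal quantile; the non-stationary version of the paper additionally lets the distribution parameters vary with time. The plotting position and sample values below are illustrative assumptions, not the paper's method or data.

```python
from statistics import NormalDist

def spi_empirical(precip):
    # Stationary, rank-based SPI sketch: Weibull plotting position i/(n+1)
    # mapped through the standard normal quantile function.
    n = len(precip)
    order = sorted(range(n), key=lambda i: precip[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf(rank / (n + 1))
    return z

precip = [31.2, 45.0, 12.3, 80.1, 55.4, 20.9, 64.7, 38.8]  # illustrative values
z = spi_empirical(precip)
```

    Wetter-than-typical periods map to positive SPI, drier ones to negative SPI; parametric versions replace the rank-based probabilities with a fitted distribution's CDF.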

  3. Two-dimensional nuclear magnetic resonance spectroscopy

    International Nuclear Information System (INIS)

    Bax, A.; Lerner, L.

    1986-01-01

    Great spectral simplification can be obtained by spreading the conventional one-dimensional nuclear magnetic resonance (NMR) spectrum in two independent frequency dimensions. This so-called two-dimensional NMR spectroscopy removes spectral overlap, facilitates spectral assignment, and provides a wealth of additional information. For example, conformational information related to interproton distances is available from resonance intensities in certain types of two-dimensional experiments. Another method generates ¹H NMR spectra of a preselected fragment of the molecule, suppressing resonances from other regions and greatly simplifying spectral appearance. Two-dimensional NMR spectroscopy can also be applied to the study of ¹³C and ¹⁵N, not only providing valuable connectivity information but also improving the sensitivity of ¹³C and ¹⁵N detection by up to two orders of magnitude. 45 references, 10 figures

  4. Critical phenomena in quasi-two-dimensional vibrated granular systems.

    Science.gov (United States)

    Guzmán, Marcelo; Soto, Rodrigo

    2018-01-01

    The critical phenomena associated with the liquid-to-solid transition of quasi-two-dimensional vibrated granular systems are studied using molecular dynamics simulations of the inelastic hard sphere model. The critical properties are associated with the fourfold bond-orientational order parameter χ4, which measures the degree of square crystallization of the system. Previous experimental results have shown that the transition of χ4, when the vibration amplitude is varied, can be either discontinuous or continuous for two different values of the height of the box. Exploring the amplitude-height phase space, a transition line is found that can be either discontinuous or continuous; the two branches merge at a tricritical point, and the continuous branch ends at an upper critical point. On the continuous branch, the critical properties are studied. The exponent associated with the amplitude of the order parameter is β=1/2 for various system sizes, in complete agreement with the experimental results. However, the fluctuations of χ4 do not show any critical behavior, probably due to crossover effects from the nearby tricritical point. Finally, in quasi-one-dimensional systems the transition is only discontinuous, limited by one critical point, indicating that two is the lowest dimension in which a tricritical point can occur.

  5. Spatiotemporal chaos and two-dimensional dissipative rogue waves in Lugiato-Lefever model

    Science.gov (United States)

    Panajotov, Krassimir; Clerc, Marcel G.; Tlidi, Mustapha

    2017-06-01

    Driven nonlinear optical cavities can exhibit complex spatiotemporal dynamics. We consider the paradigmatic Lugiato-Lefever model describing a driven nonlinear optical resonator. This model is one of the most-studied nonlinear equations in optics. It describes a large spectrum of nonlinear phenomena, from bistability to periodic patterns, localized structures, self-pulsating localized structures, and complex spatiotemporal behavior. The model is also considered a prototype for describing several nonlinear optical devices, such as Kerr media, liquid crystals, left-handed materials, nonlinear fiber cavities, and frequency comb generation. We focus our analysis on spatiotemporal chaotic dynamics in one dimension. We identify a route to spatiotemporal chaos through an extended quasiperiodicity. We estimate the Kaplan-Yorke dimension, which provides a measure of the complexity of the strange attractor. Likewise, we show that the Lugiato-Lefever equation supports rogue waves in two-dimensional settings. We characterize rogue-wave formation by computing the probability distribution of the pulse height. Contribution to the Topical Issue "Theory and Applications of the Lugiato-Lefever Equation", edited by Yanne K. Chembo, Damia Gomila, Mustapha Tlidi, Curtis R. Menyuk.

  6. GEPOIS: a two dimensional nonuniform mesh Poisson solver

    International Nuclear Information System (INIS)

    Quintenz, J.P.; Freeman, J.R.

    1979-06-01

    A computer code is described which solves Poisson's equation for the electric potential over a two dimensional cylindrical (r,z) nonuniform mesh which can contain internal electrodes. Poisson's equation is solved over a given region subject to a specified charge distribution with either Neumann or Dirichlet perimeter boundary conditions and with Dirichlet boundary conditions on internal surfaces. The static electric field is also computed over the region with special care given to normal electric field components at boundary surfaces
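
    The kind of iterative solve such a code performs can be sketched on a uniform Cartesian grid with Dirichlet boundaries; the actual GEPOIS code handles nonuniform (r, z) meshes, Neumann conditions and internal electrodes, all omitted from this illustration.

```python
import numpy as np

# Jacobi iteration for the 2D Poisson equation on a uniform grid (sketch).
# With this sign convention the stencil solves  laplacian(u) = -rho.
n = 17
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
rho = np.zeros((n, n))                 # zero charge: reduces to Laplace

u = np.zeros((n, n))
u[0, :] = x                            # Dirichlet boundary data chosen so the
u[-1, :] = x                           # exact solution is u(x, y) = x
u[:, 0] = 0.0
u[:, -1] = 1.0

for _ in range(4000):
    # Five-point stencil update of the interior points
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2] +
                            h * h * rho[1:-1, 1:-1])
```

    After convergence the interior reproduces the linear potential, and the electric field follows from finite differences of u, the step a production solver takes special care with at boundaries.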

  7. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  8. On some classes of two-dimensional local models in discrete two-dimensional monatomic FPU lattice with cubic and quartic potential

    International Nuclear Information System (INIS)

    Quan, Xu; Qiang, Tian

    2009-01-01

    This paper discusses the two-dimensional discrete monatomic Fermi–Pasta–Ulam lattice, by using the method of multiple scales and the quasi-discreteness approach. By taking into account the interaction between the atoms in the lattice and their nearest neighbours, it obtains some classes of two-dimensional local models as follows: two-dimensional bright and dark discrete soliton trains, two-dimensional bright and dark line discrete breathers, and two-dimensional bright and dark discrete breathers. (condensed matter: structure, thermal and mechanical properties)

  9. Two-dimensional models

    International Nuclear Information System (INIS)

    Schroer, Bert; Freie Universitaet, Berlin

    2005-02-01

    It is not possible to compactly review the overwhelming literature on two-dimensional models in a meaningful way without a specific viewpoint; I have therefore tacitly added to the above title the words 'as theoretical laboratories for general quantum field theory'. I dedicate this contribution to the memory of J. A. Swieca with whom I have shared the passion of exploring 2-dimensional models for almost one decade. A shortened version of this article is intended as a contribution to the project 'Encyclopedia of mathematical physics' and comments, suggestions and critical remarks are welcome. (author)

  10. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting-position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions, and an accurate determination of the probability distribution for maximum wind speed data is very important in estimating the extreme value. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions (Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull) are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by the statistical test criteria used in frequency analysis: the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE). The data show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a period of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE.
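
    The PPCC criterion can be sketched as follows: sort the data, assign each order statistic a plotting-position probability, map these through the candidate distribution's quantile function, and correlate the two sequences. The sketch below (standard-library Python, illustrative sample values rather than the Cairo data) uses the Normal model with the general plotting position (i - a)/(n + 1 - 2a), where a = 0 gives Weibull and a = 0.44 Gringorten.

```python
from statistics import NormalDist, mean

def ppcc_normal(sample, a=0.0):
    # Probability plot correlation coefficient against a Normal model.
    # Plotting position (i - a) / (n + 1 - 2a): a=0 Weibull, a=0.44 Gringorten.
    x = sorted(sample)
    n = len(x)
    nd = NormalDist()
    q = [nd.inv_cdf((i - a) / (n + 1 - 2 * a)) for i in range(1, n + 1)]
    mx, mq = mean(x), mean(q)
    num = sum((xi - mx) * (qi - mq) for xi, qi in zip(x, q))
    den = (sum((xi - mx) ** 2 for xi in x) *
           sum((qi - mq) ** 2 for qi in q)) ** 0.5
    return num / den

# A sample built from exact Normal(70, 12) quantiles should plot as a
# near-perfect straight line, giving a PPCC close to 1.
sample = [NormalDist(70.0, 12.0).inv_cdf((i + 0.5) / 200) for i in range(200)]
r_weibull = ppcc_normal(sample)
r_gringorten = ppcc_normal(sample, a=0.44)
```

    Fitting several candidate distributions this way and keeping the one with the highest PPCC is the selection logic the abstract describes.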

  11. Measurement of probability distributions for internal stresses in dislocated crystals

    Energy Technology Data Exchange (ETDEWEB)

    Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)

    2014-11-03

    Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis and based on the so-called “restricted second moment of the probability distribution” can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face centred cubic Cu deformed in uniaxial tension, a body centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.

  12. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls, to supply chain risks with inventory controls, and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
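
    The headline effect can be reproduced with a small Monte Carlo sketch for the normal (log-loss) case: set the threshold at the plug-in 99% quantile estimated from a small sample, then compute the exact exceedance probability under the true parameters. All sample sizes and parameter values below are illustrative assumptions, not the paper's calculations.

```python
import random
from statistics import NormalDist, mean, stdev

rng = random.Random(42)
nd = NormalDist()
z99 = nd.inv_cdf(0.99)          # nominal failure probability: 1%
n, reps = 20, 3000
true_mu, true_sigma = 0.0, 1.0

freqs = []
for _ in range(reps):
    # Estimate location and scale from a small sample of log-losses
    sample = [rng.gauss(true_mu, true_sigma) for _ in range(n)]
    threshold = mean(sample) + z99 * stdev(sample)
    # Exact probability that a new loss exceeds the plug-in threshold
    freqs.append(1.0 - nd.cdf((threshold - true_mu) / true_sigma))

# Averaged over estimation error, the failure frequency exceeds the nominal 1%
expected_failure = mean(freqs)
```

    The pivotal structure of location-scale families is what makes this expected frequency independent of the true (but unknown) parameters.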

  13. Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence

    Directory of Open Access Journals (Sweden)

    C. C. Wu

    2011-04-01

    Full Text Available Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristics of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has been successfully applied to MHD turbulence simulations and to turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  14. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
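
    The core identity behind such formulas, for a single-interval task with two equiprobable stimulus classes, is that the maximum-likelihood observer scores Pc = (1/2) Σ_x max[P(x|A), P(x|B)]. A sketch for discretised distributions follows; the grid and d' value are illustrative, and this is not the authors' published MATLAB code.

```python
import math

def max_pc(p1, p2):
    # Maximum proportion correct for an ML observer with two equiprobable
    # classes: sum the larger of the two probability masses at each point.
    return 0.5 * sum(max(a, b) for a, b in zip(p1, p2))

def gauss_pmf(mu, xs, dx):
    # Discretised unit-variance Gaussian probability masses on the grid
    return [dx * math.exp(-(x - mu) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
            for x in xs]

dx = 0.01
xs = [i * dx for i in range(-800, 801)]
pA = gauss_pmf(0.0, xs, dx)
pB = gauss_pmf(1.0, xs, dx)   # separation d' = 1

pc = max_pc(pA, pB)           # approaches Phi(d'/2), about 0.6915
```

    As the article notes, the discrete sum stays accurate for continuous densities provided the sampling resolution dx is fine enough; the same max-rule extends to non-Gaussian mass functions directly.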

  15. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  16. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The discussion of whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that have been suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically in terms of the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability of false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the spike-train structure as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
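
    The setting can be sketched by simulating two independent gamma renewal processes, counting coincident spikes across trials, and computing the Fano factor of the coincidence-count distribution. All rates, shape parameters and window widths below are illustrative assumptions, not values from the study.

```python
import random

rng = random.Random(1)

def gamma_renewal_train(shape, rate, t_max):
    # Renewal process with gamma-distributed interspike intervals
    # (mean interval = shape / rate).
    t, spikes = 0.0, []
    while True:
        t += rng.gammavariate(shape, 1.0 / rate)
        if t > t_max:
            return spikes
        spikes.append(t)

def coincidences(train1, train2, width):
    # Number of train-1 spikes with a train-2 spike within +/- width
    return sum(any(abs(s - u) <= width for u in train2) for s in train1)

# Coincidence counts across independent trials of two independent processes
counts = [coincidences(gamma_renewal_train(3.0, 30.0, 10.0),
                       gamma_renewal_train(3.0, 30.0, 10.0), 0.005)
          for _ in range(200)]
mean_c = sum(counts) / len(counts)
var_c = sum((c - mean_c) ** 2 for c in counts) / (len(counts) - 1)
fano = var_c / mean_c   # FFc of the joint-spike-event distribution
```

    Varying the gamma shape parameter changes the trains' coefficient of variation, which is exactly the dependence of the FFc that the paper characterizes.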

  17. Two-dimensional multifractal cross-correlation analysis

    International Nuclear Information System (INIS)

    Xi, Caiping; Zhang, Shuning; Xiong, Gang; Zhao, Huichang; Yang, Yonghong

    2017-01-01

    Highlights: • We study the mathematical models of 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Present the definition of the two-dimensional N²-partitioned multiplicative cascading process. • Do the comparative analysis of 2D-MC by 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • Provide a reference on the choice and parameter settings of these methods in practice. - Abstract: There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross-correlations. This paper presents two-dimensional multifractal cross-correlation analysis based on the partition function (2D-MFXPF), two-dimensional multifractal cross-correlation analysis based on the detrended fluctuation analysis (2D-MFXDFA) and two-dimensional multifractal cross-correlation analysis based on the detrended moving average analysis (2D-MFXDMA). We apply these methods to pairs of two-dimensional multiplicative cascades (2D-MC) to do a comparative study. Then, we apply the two-dimensional multifractal cross-correlation analysis based on the detrended fluctuation analysis (2D-MFXDFA) to real images and unveil intriguing multifractality in the cross correlations of the material structures. At last, we give the main conclusions and provide a valuable reference on how to choose the multifractal algorithms in the potential applications in the field of SAR image classification and detection.

  18. Numerical simulation of aerodynamic sound radiated from a two-dimensional airfoil

    OpenAIRE

    Iida, Akiyoshi (Mechanical Engineering Research Laboratory, Hitachi Ltd.); Otaguro, Toshio (Mechanical Engineering Research Laboratory, Hitachi Ltd.); Kato, Chisachi (University of Tokyo)

    2000-01-01

    An aerodynamic sound radiated from a two-dimensional airfoil has been computed with the Lighthill-Curle theory. The predicted sound pressure level is in agreement with the measured one. The distribution of vortex sound sources is also estimated based on the correlation between the unsteady vorticity fluctuations and the aerodynamic sound. The distribution of vortex sound sources reveals that separated shear layers generate aerodynamic sound. This result helps in understanding noise reduction methods.

  19. Probability distributions in conservative energy exchange models of multiple interacting agents

    International Nuclear Information System (INIS)

    Scafetta, Nicola; West, Bruce J

    2007-01-01

    Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ in the rules that regulate the energy exchange and in boundary effects. We find a variety of stochastic behaviours that manifest equilibrium energy probability distributions of different types: the interaction rules yield not only exponential distributions, such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power-law distributions, mixed exponential and inverse power-law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena, including those found in complex chemical reactions
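
    The simplest member of this model family, in which a random pair of agents repartitions its pooled energy uniformly at random, relaxes to the exponential (Boltzmann-Gibbs) distribution. A sketch with illustrative parameters:

```python
import random

rng = random.Random(7)
n_agents, n_steps = 1000, 200000
energy = [1.0] * n_agents        # every interaction conserves total energy

for _ in range(n_steps):
    i, j = rng.randrange(n_agents), rng.randrange(n_agents)
    if i == j:
        continue
    # Pool the pair's energy and split it uniformly at random
    pool = energy[i] + energy[j]
    split = rng.random()
    energy[i], energy[j] = split * pool, (1.0 - split) * pool

# At equilibrium the energies follow ~ exp(-e / mean_energy), so the
# fraction of agents below the mean is close to 1 - exp(-1), about 0.632.
frac_below_mean = sum(1 for e in energy if e < 1.0) / n_agents
```

    Changing the exchange rule (e.g. fixed saving fractions, or boundary agents coupled to reservoirs) produces the Gamma, truncated-exponential and power-law variants the abstract lists.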

  20. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    Science.gov (United States)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, the regional analysis of this parameter is important. However, the ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were determined. The data used in this research were recorded at 30 synoptic stations of Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the PPCC test and the L-moment method performed similarly in the regional analysis of reference evapotranspiration and its effective parameters. The results also showed that the distribution type of the parameters affecting ET0 can in turn affect the distribution of reference evapotranspiration.
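The two ingredients of the selection procedure, sample L-moments and the probability plot correlation coefficient, can be sketched as follows. This is illustrative code, not the study's implementation: the Pearson type III candidate is taken from `scipy.stats.pearson3`, and the skew, location and scale values are invented rather than fitted to the study's station data.

```python
import numpy as np
from scipy import stats

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments:
    l1 = b0, l2 = 2*b1 - b0."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # E[X * F(X)] estimator
    return b0, 2 * b1 - b0

def ppcc(x, dist):
    """Probability plot correlation coefficient: correlation between the
    order statistics of x and the quantiles of a frozen scipy distribution
    at Weibull plotting positions i/(n+1)."""
    x = np.sort(x)
    n = len(x)
    q = dist.ppf(np.arange(1, n + 1) / (n + 1))
    return np.corrcoef(x, q)[0, 1]

rng = np.random.default_rng(0)
candidate = stats.pearson3(skew=0.8, loc=1000.0, scale=150.0)  # made-up parameters
data = candidate.rvs(500, random_state=rng)
print(sample_l_moments(data))
print(ppcc(data, candidate))   # close to 1 for a well-fitting candidate
```

In the actual selection procedure, the candidate with the PPCC closest to 1 across stations would be retained.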

  1. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
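The idea of comparing interval numbers that carry a probability distribution can be made concrete with a small Monte Carlo estimate of P(A > B). The paper's stochastic dominance degree is an analytical construction, so the function below is a hypothetical stand-in for illustration only; the default uniform sampler corresponds to the implicit assumption the authors criticize, and any other sampler can be supplied instead.

```python
import random

def dominance_degree(a, b, dist_a=None, dist_b=None, n=100000, seed=0):
    """Monte Carlo estimate of P(A > B) for interval numbers A = [a1, a2]
    and B = [b1, b2] carrying probability distributions. dist_a/dist_b are
    samplers taking a random.Random; the default is uniform on the interval."""
    rng = random.Random(seed)
    dist_a = dist_a or (lambda r: r.uniform(*a))
    dist_b = dist_b or (lambda r: r.uniform(*b))
    wins = sum(dist_a(rng) > dist_b(rng) for _ in range(n))
    return wins / n

print(dominance_degree((2, 6), (1, 3)))   # ≈ 0.9375 for uniform distributions
```

Replacing the uniform samplers with non-uniform ones changes the dominance degree even though the intervals themselves are unchanged, which is precisely the information loss the article addresses.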

  2. Qubit-qutrit separability-probability ratios

    International Nuclear Information System (INIS)

    Slater, Paul B.

    2005-01-01

    Paralleling our recent computationally intensive (quasi-Monte Carlo) work for the case N=4 (e-print quant-ph/0308037), we undertake the task for N=6 of computing to high numerical accuracy the formulas of Sommers and Zyczkowski (e-print quant-ph/0304041) for the (N^2-1)-dimensional volume and (N^2-2)-dimensional hyperarea of the (separable and nonseparable) NxN density matrices, based on the Bures (minimal monotone) metric--and also their analogous formulas (e-print quant-ph/0302197) for the (nonmonotone) flat Hilbert-Schmidt metric. With the same 7 x 10^9 well-distributed ('low-discrepancy') sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable plus nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-6 (rank-5) density matrices. The (rank-6) separability probabilities obtained based on the 35-dimensional volumes appear to be--independently of the metric (each of the seven inducing Haar measure) employed--twice as large as those (rank-5 ones) based on the 34-dimensional hyperareas. (An additional estimate--33.9982--of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is also quite clearly close to an integer.) The doubling relationship also appears to hold for the N=4 case for the Hilbert-Schmidt metric, but not the others. We fit simple exact formulas to our estimates of the Hilbert-Schmidt separable volumes and hyperareas in both the N=4 and N=6 cases
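For the simplest case in this line of work, N=4 (two qubits), the Hilbert-Schmidt separability probability can be estimated directly, because the Peres-Horodecki positive-partial-transpose test is exact there. The sketch below uses plain Monte Carlo rather than the paper's quasi-Monte Carlo ('low-discrepancy') points.

```python
import numpy as np

def hs_random_density_matrix(n, rng):
    """Density matrix drawn from the Hilbert-Schmidt measure:
    rho = G G† / tr(G G†), with G a complex Ginibre matrix."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def is_ppt(rho):
    """Positivity of the partial transpose over the second qubit; for
    4x4 (two-qubit) states this is necessary and sufficient for
    separability (Peres-Horodecki)."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min() >= -1e-12

rng = np.random.default_rng(42)
n_samples = 50000
sep = sum(is_ppt(hs_random_density_matrix(4, rng)) for _ in range(n_samples))
print(sep / n_samples)   # ≈ 0.242; the conjectured exact value is 8/33
```

For N=6 (qubit-qutrit, the paper's case) PPT is still necessary and sufficient, but the acceptance rate is far lower, which is why the high-accuracy quasi-Monte Carlo machinery of the paper is needed there.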

  3. The joint probability distribution of structure factors incorporating anomalous-scattering and isomorphous-replacement data

    International Nuclear Information System (INIS)

    Peschar, R.; Schenk, H.

    1991-01-01

    A method to derive joint probability distributions of structure factors is presented which incorporates anomalous-scattering and isomorphous-replacement data in a unified procedure. The structure factors F_H and F_-H, whose magnitudes differ due to anomalous scattering, are shown to be isomorphously related. This leads to a definition of isomorphism by means of which isomorphous-replacement and anomalous-scattering data can be handled simultaneously. The definition and calculation of the general term of the joint probability distribution for isomorphous structure factors turns out to be crucial. Its analytical form leads to an algorithm by means of which any particular joint probability distribution of structure factors can be constructed. The calculation of the general term is discussed for the case of four isomorphous structure factors in P1, assuming the atoms to be independently and uniformly distributed. A main result is the construction of the probability distribution of the 64 triplet phase sums present in space group P1 amongst four isomorphous structure factors F_H, four isomorphous F_K and four isomorphous F_-H-K. The procedure is readily generalized to the case where an arbitrary number of isomorphous structure factors are available for F_H, F_K and F_-H-K. (orig.)

  4. FPGA Implementation of one-dimensional and two-dimensional cellular automata

    International Nuclear Information System (INIS)

    D'Antone, I.

    1999-01-01

    This report describes the hardware implementation of one-dimensional and two-dimensional cellular automata (CAs). After a general introduction to cellular automata, we consider a one-dimensional CA used to implement pseudo-random techniques in built-in self test for VLSI. Due to the increase in digital ASIC complexity, testing is becoming one of the major costs in VLSI production. The high complexity of the electronics used in particle physics experiments demands higher reliability than in the past. General criteria are given to evaluate the feasibility of the circuit used for testing, and some quantitative parameters are highlighted to optimize the architecture of the cellular automaton. Furthermore, we propose a two-dimensional CA that performs a peak-finding algorithm in a matrix of cells mapping a sub-region of a calorimeter. As in a two-dimensional filtering process, the peaks of the energy clusters are found in one evolution step. This CA belongs to Wolfram class II cellular automata. Some quantitative parameters are given to optimize the architecture of the cellular automaton implemented in a commercial field programmable gate array (FPGA)
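The kind of one-dimensional CA used here can be emulated in software in a few lines. Rule 90, in which each cell becomes the XOR of its two neighbours, is a classic ingredient of CA-based pseudo-random pattern generators for built-in self test; the cyclic boundary below is an assumption for the sketch, since hardware registers often use null boundaries instead.

```python
def step(cells, rule):
    """One synchronous update of an elementary (radius-1) CA with cyclic
    boundary conditions; `rule` is the Wolfram rule number, whose bits
    form the lookup table indexed by (left, centre, right)."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Rule 90: each cell becomes left XOR right.
cells = [0] * 16
cells[8] = 1                       # single seed cell
for _ in range(5):
    cells = step(cells, 90)
print("".join(map(str, cells)))    # Pascal's triangle mod 2 emerges row by row
```

In hardware each cell is a flip-flop plus a small combinational block, so one evolution step costs a single clock cycle, which is what makes CAs attractive both for test pattern generation and for the one-step peak finding described above.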

  5. Investigating and improving student understanding of the probability distributions for measuring physical observables in quantum mechanics

    International Nuclear Information System (INIS)

    Marshman, Emily; Singh, Chandralekha

    2017-01-01

    A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations. (paper)
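The concept the QuILT targets, the probability distribution of measurement outcomes for an observable, reduces to the Born rule, which is a one-liner in any linear algebra package. The example below uses a Pauli operator for illustration; it is not taken from the QuILT itself.

```python
import numpy as np

def measurement_probabilities(psi, observable):
    """Born rule: diagonalize the Hermitian operator and project the
    normalized state onto each eigenvector. For degenerate eigenvalues
    the per-eigenvector probabilities must be summed."""
    evals, evecs = np.linalg.eigh(observable)
    probs = np.abs(evecs.conj().T @ psi) ** 2
    return evals, probs

# Measuring sigma_x on the state |0>: outcomes ±1, each with probability 1/2.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)
evals, probs = measurement_probabilities(psi, sigma_x)
print(evals)   # [-1.  1.]
print(probs)   # [0.5 0.5]
```

The same function works in any finite-dimensional representation, which mirrors the QuILT's goal of moving between Dirac notation and a concrete basis.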

  6. Lie algebra contractions on two-dimensional hyperboloid

    International Nuclear Information System (INIS)

    Pogosyan, G. S.; Yakhno, A.

    2010-01-01

    The Inönü-Wigner contraction from the SO(2, 1) group to the Euclidean E(2) and E(1, 1) groups is used to relate the separation of variables in the Laplace-Beltrami (Helmholtz) equations for the four corresponding two-dimensional homogeneous spaces: two-dimensional hyperboloids and two-dimensional Euclidean and pseudo-Euclidean spaces. We show how the nine systems of coordinates on the two-dimensional hyperboloids contract to the four systems of coordinates on E(2) and eight on E(1, 1). The text was submitted by the authors in English.

  7. Quasi-two-dimensional holography

    International Nuclear Information System (INIS)

    Kutzner, J.; Erhard, A.; Wuestenberg, H.; Zimpfer, J.

    1980-01-01

    Acoustical holography with numerical reconstruction by area scanning is memory- and time-intensive. Drawing on our experience with linear holography, we derived a scanning procedure for evaluating two-dimensional flaw sizes. In most practical cases it is sufficient to determine the exact depth extension of a flaw, whereas the accuracy of the length extension is less critical. For this reason the so-called quasi-two-dimensional holography is applicable. The sound field, produced by special probes, is divergent in the inclined plane and slightly focused in the perpendicular plane using cylindrical lenses. (orig.) [de

  8. Dimensional Effects on the Momentum distribution of Bosonic Trimer States

    DEFF Research Database (Denmark)

    F. Bellotti, F.; Frederico, T.; T. Yamashita, M.

    2013-01-01

    -body contact parameter is universal and then demonstrate that the momentum distribution at next-to-leading order has a logarithmic dependence on momentum which is vastly different from the three-dimensional case. Based on this, we propose a scheme for measuring the effective dimensionality of a quantum many......-body system by exploiting the functional form of the momentum distribution....

  9. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
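scipy ships the Kolmogorov-Smirnov test; Kuiper's statistic is easy to add on top of it as a sketch (only the statistic is computed here, not its critical values). The heavy-tailed alternative below illustrates the tail-sensitivity issue the paper raises: the discrepancy lives in the tails, where the CDF-based statistics are weakest.

```python
import numpy as np
from scipy import stats

def kuiper_statistic(data, cdf):
    """Kuiper's statistic V = D+ + D-, the Kolmogorov-Smirnov variant
    that is more even-handed about discrepancies in the tails."""
    x = np.sort(data)
    n = len(x)
    f = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return d_plus + d_minus

rng = np.random.default_rng(3)
good = rng.standard_normal(2000)            # really is N(0, 1)
bad = rng.standard_t(df=2, size=2000)       # heavy tails, tested against N(0, 1)
print(stats.kstest(good, stats.norm.cdf).pvalue)   # should be large
print(stats.kstest(bad, stats.norm.cdf).pvalue)    # should be tiny
print(kuiper_statistic(good, stats.norm.cdf), kuiper_statistic(bad, stats.norm.cdf))
```

The density-based tests advocated in the paper are designed for cases where such tail discrepancies carry little CDF mass and both statistics above stay small.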

  10. Details of 1π sr wide acceptance angle electrostatic lens for electron energy and two-dimensional angular distribution analysis combined with real space imaging

    International Nuclear Information System (INIS)

    Tóth, László; Matsuda, Hiroyuki; Matsui, Fumihiko; Goto, Kentaro; Daimon, Hiroshi

    2012-01-01

    We propose a new 1π sr Wide Acceptance Angle Electrostatic Lens (WAAEL), which works as a photoemission electron microscope (PEEM), a highly sensitive display-type electron energy and two-dimensional angular distribution analyzer. It can display two-dimensional angular distributions of charged particles within an acceptance angle of ±60°, much larger than the largest acceptance angle range so far and comparable to the display-type spherical mirror analyzer developed by Daimon et al. It has good focusing capabilities with 5-times magnification and 27(4) μm lateral resolution. The relative energy resolution is typically 2 to 5 x 10^-3, depending on the diameter of the energy aperture and the emission area on the sample. Although the lateral resolution of the presented lens is far from what is available nowadays, this is the first working model that can form images using charged particles collected from a 1π sr wide acceptance angle. The realization of such a lens system is one of the first possible steps towards the field of imaging-type atomic-resolution electron microscopy envisioned by Feynman et al. Here some preliminary results are shown.

  11. Topology optimization of two-dimensional waveguides

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Sigmund, Ole

    2003-01-01

    In this work we use the method of topology optimization to design two-dimensional waveguides with low transmission loss.

  12. Traditional Semiconductors in the Two-Dimensional Limit.

    Science.gov (United States)

    Lucking, Michael C; Xie, Weiyu; Choe, Duk-Hyun; West, Damien; Lu, Toh-Ming; Zhang, S B

    2018-02-23

    Interest in two-dimensional materials has exploded in recent years. Not only are they studied due to their novel electronic properties, such as the emergent Dirac fermion in graphene, but also as a new paradigm in which stacking layers of distinct two-dimensional materials may enable different functionality or devices. Here, through first-principles theory, we reveal a large new class of two-dimensional materials which are derived from traditional III-V, II-VI, and I-VII semiconductors. It is found that in the ultrathin limit the great majority of traditional binary semiconductors studied (a series of 28 semiconductors) are not only kinetically stable in a two-dimensional double layer honeycomb structure, but more energetically stable than the truncated wurtzite or zinc-blende structures associated with three dimensional bulk. These findings both greatly increase the landscape of two-dimensional materials and also demonstrate that in the double layer honeycomb form, even ordinary semiconductors, such as GaAs, can exhibit exotic topological properties.

  13. Sufficient Controllability Condition for Affine Systems with Two-Dimensional Control and Two-Dimensional Zero Dynamics

    Directory of Open Access Journals (Sweden)

    D. A. Fetisov

    2015-01-01

    Full Text Available The controllability conditions are well known if we speak about linear stationary systems: a linear stationary system is controllable if and only if the dimension of the state vector is equal to the rank of the controllability matrix. The concept of the controllability matrix is extended to affine systems, but relations between affine systems controllability and properties of this matrix are more complicated. Various controllability conditions are set for affine systems, but they deal as usual either with systems of some special form or with controllability in some small neighborhood of the concerned point. An affine system is known to be controllable if the system is equivalent to a system of a canonical form, which is defined and regular in the whole space of states. In this case, the system is said to be feedback linearizable in the space of states. However there are examples, which illustrate that a system can be controllable even if it is not feedback linearizable in any open subset in the space of states. In this article we deal with such systems.Affine systems with two-dimensional control are considered. The system in question is assumed to be equivalent to a system of a quasicanonical form with two-dimensional zero dynamics which is defined and regular in the whole space of states. Therefore the controllability of the original system is equivalent to the controllability of the received system of a quasicanonical form. In this article the sufficient condition for an available solution of the terminal problem is proven for systems of a quasicanonical form with two-dimensional control and two-dimensional zero dynamics. The condition is valid in the case of an arbitrary time interval and arbitrary initial and finite states of the system. Therefore the controllability condition is set for systems of a quasicanonical form with two-dimensional control and two-dimensional zero dynamics. An example is given which illustrates how the proved

  14. Computer program determines exact two-sided tolerance limits for normal distributions

    Science.gov (United States)

    Friedman, H. A.; Webb, S. R.

    1968-01-01

    Computer program determines by numerical integration the exact statistical two-sided tolerance limits, when the proportion between the limits is at least a specified number. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.
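The exact limits require the numerical integration described above, but Howe's closed-form approximation reproduces the tabulated factors to about three significant figures and makes the structure of the problem clear. This is a sketch of the approximation (assuming scipy for the normal and chi-square quantiles), not the program's exact algorithm.

```python
import math
from scipy import stats

def two_sided_tolerance_factor(n, coverage=0.99, confidence=0.95):
    """Howe's approximation to the two-sided normal tolerance factor k:
    the interval xbar ± k*s covers at least `coverage` of a normal
    population with probability `confidence`, for a sample of size n
    with unknown mean and variance."""
    z = stats.norm.ppf((1 + coverage) / 2)
    df = n - 1
    chi2 = stats.chi2.ppf(1 - confidence, df)
    return z * math.sqrt(df * (1 + 1 / n) / chi2)

print(round(two_sided_tolerance_factor(20, 0.99, 0.95), 3))  # ≈ 3.62 (tables give 3.615)
```

The factor shrinks toward the plain normal quantile z as n grows, since the uncertainty in the estimated mean and variance vanishes.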

  15. Multimode Interference: Identifying Channels and Ridges in Quantum Probability Distributions

    OpenAIRE

    O'Connell, Ross C.; Loinaz, Will

    2004-01-01

    The multimode interference technique is a simple way to study the interference patterns found in many quantum probability distributions. We demonstrate that this analysis not only explains the existence of so-called "quantum carpets," but can explain the spatial distribution of channels and ridges in the carpets. With an understanding of the factors that govern these channels and ridges we have a limited ability to produce a particular pattern of channels and ridges by carefully choosing the ...

  16. The Wigner distribution function for the one-dimensional parabose oscillator

    International Nuclear Information System (INIS)

    Jafarov, E; Lievens, S; Jeugt, J Van der

    2008-01-01

    In the beginning of the 1950s, Wigner introduced a fundamental deformation from the canonical quantum mechanical harmonic oscillator, which is nowadays sometimes called a Wigner quantum oscillator or a parabose oscillator. Also, in quantum mechanics the so-called Wigner distribution is considered to be the closest quantum analogue of the classical probability distribution over the phase space. In this paper, we consider which definition for such a distribution function could be used in the case of non-canonical quantum mechanics. We then explicitly compute two different expressions for this distribution function for the case of the parabose oscillator. Both expressions turn out to be multiple sums involving (generalized) Laguerre polynomials. Plots then show that the Wigner distribution function for the ground state of the parabose oscillator is similar in behaviour to the Wigner distribution function of the first excited state of the canonical quantum oscillator
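For reference, the canonical Wigner distribution function against which the parabose generalization is compared is (standard definition, not specific to this paper):

```latex
W(x,p) \;=\; \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}
\psi^{*}(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,\mathrm{d}y .
```

It is real, normalized, and yields the position and momentum probability densities as its marginals, but it may take negative values, which is why it is only the "closest quantum analogue" of a classical phase-space density and why its definition must be revisited in the non-canonical (parabose) setting.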

  17. Comparision of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 and magnitudes M ≥ 6.0, and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distributions in this region.
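All three candidate families are available in scipy, so the fit-and-test loop can be sketched as follows. The synthetic sample and its parameters are invented for illustration, not taken from the earthquake catalogue; note that scipy calls the Frechet distribution `invweibull`, and that fixing `floc=0` versus leaving it free switches between the two- and three-parameter Weibull forms.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic stand-in for an inter-event-time sample (invented parameters).
times = stats.weibull_min(c=1.4, scale=8.0).rvs(300, random_state=rng)

fits = [
    ("Weibull (2-parameter)", stats.weibull_min, {"floc": 0}),
    ("Weibull (3-parameter)", stats.weibull_min, {}),
    ("Frechet", stats.invweibull, {"floc": 0}),   # Frechet = inverse Weibull
]
for name, dist, kwargs in fits:
    params = dist.fit(times, **kwargs)            # maximum-likelihood fit
    d, p = stats.kstest(times, dist(*params).cdf)
    print(f"{name}: K-S D = {d:.3f}, p = {p:.2f}")
```

One caveat: K-S p-values computed against parameters estimated from the same sample are optimistic, so in practice the D statistics are compared against adjusted critical values rather than read off directly.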

  18. Evolution of two-dimensional soap froth with a single defect

    International Nuclear Information System (INIS)

    Levitan, B.

    1994-01-01

    The temporal evolution of two-dimensional soap froth, starting from a particular initial state, is studied. The initial state is a hexagonal array of bubbles in which a single defect is introduced. A cluster of transformed bubbles grows; the time dependence of the number of bubbles in this cluster is investigated and the distribution of the topological classes in the evolving part of the system is calculated. The distribution appears to approach a fixed limiting form, which differs from that obtained for the usual scaling state of the froth

  19. Three-dimensional distribution of cortical synapses: a replicated point pattern-based analysis

    Science.gov (United States)

    Anton-Sanchez, Laura; Bielza, Concha; Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has shown that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in both the synaptic density and the spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total tissue volume of approximately 4500 μm^3 and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, the results showed that synapses in layer I had a slightly different spatial distribution from the other layers. PMID:25206325
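An RSA point process of the kind used to model the synapses can be simulated directly: propose uniformly random points and accept each one only if it respects a hard-core exclusion distance around every previously accepted point. The box size, exclusion radius and target count below are arbitrary stand-ins, not the measured densities from the FIB/SEM volumes.

```python
import numpy as np

def rsa_sample(box=(10.0, 10.0, 10.0), radius=0.5, target=200,
               max_tries=200000, seed=0):
    """Random sequential adsorption in a 3-D box: accept a uniformly
    random point only if it lies at least 2*radius from every accepted
    point, until `target` points are placed or the try budget runs out."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(max_tries):
        p = rng.uniform((0.0, 0.0, 0.0), box)
        if all(np.linalg.norm(p - q) >= 2 * radius for q in pts):
            pts.append(p)
            if len(pts) == target:
                break
    return np.array(pts)

synapses = rsa_sample()
print(len(synapses))                       # 200 accepted centres
d = np.linalg.norm(synapses[:, None] - synapses[None, :], axis=-1)
print(d[d > 0].min() >= 1.0)               # True: hard-core constraint holds
```

Comparing summary statistics (e.g. the nearest-neighbour distance distribution) of such simulations against the reconstructed synapse positions is the essence of the replicated point pattern analysis used in the study.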

  20. COBRA/TRAC analysis of two-dimensional thermal-hydraulic behavior in SCTF reflood tests

    International Nuclear Information System (INIS)

    Iwamura, Takamichi; Ohnuki, Akira; Sobajima, Makoto; Adachi, Hiromichi

    1987-01-01

    The effects of radial power distribution and non-uniform upper plenum water accumulation on thermal-hydraulic behavior in the core were observed in the reflood tests with the Slab Core Test Facility (SCTF). In order to examine the predictability of these two effects by a multi-dimensional analysis code, COBRA/TRAC calculations were made. The calculated results indicated that the heat transfer enhancement in high-power bundles above the quench front was caused by the high vapor flow rate in those bundles due to the radial power distribution. On the other hand, the heat transfer degradation in the peripheral bundles under the condition of non-uniform upper plenum water accumulation was caused by the lower flow rates of vapor and entrained liquid above the quench front in those bundles, because vapor concentrated in the center bundles due to the cross flow induced by the horizontal pressure gradient in the core. The two-dimensional heat transfer behaviors calculated with the COBRA/TRAC code are similar to those observed in the SCTF tests, and the calculations are therefore useful for investigating the mechanism of the two-dimensional effects in SCTF reflood tests. (author)

  1. Probability distribution relationships

    Directory of Open Access Journals (Sweden)

    Yousry Abdelkader

    2013-05-01

    Full Text Available In this paper, we are interested in showing the most famous distributions and their relations to other distributions in a set of diagrams. Four diagrams are sketched as networks. The first one concerns the continuous distributions and their relations. The second one presents the discrete distributions. The third diagram depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.

  2. Application of fast neutron radiography to three-dimensional visualization of steady two-phase flow in a rod bundle

    CERN Document Server

    Takenaka, N; Fujii, T; Mizubata, M; Yoshii, K

    1999-01-01

    The three-dimensional void fraction distribution of air-water two-phase flow in a 4x4 rod bundle near a spacer was visualized by fast neutron radiography using a CT method. The one-dimensional, cross-sectionally averaged void fraction distribution was also calculated. The behavior of low-void-fraction (thick-water) two-phase flow in the rod bundle around the spacer was clearly visualized. It was shown that the void fraction distributions were visualized with a quality similar to that of thermal neutron radiography, for a low-void-fraction two-phase flow that is difficult to visualize by thermal neutron radiography. It is concluded that fast neutron radiography is efficiently applicable to two-phase flow studies.

  3. The distribution function of a probability measure on a space with a fractal structure

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Granero, M.A.; Galvez-Rodriguez, J.F.

    2017-07-01

    In this work we show how to define a probability measure with the help of a fractal structure. One of the keys of this approach is to use the completion of the fractal structure. Then we use the theory of a cumulative distribution function on a Polish ultrametric space and describe it in this context. Finally, with the help of fractal structures, we prove that a function satisfying the properties of a cumulative distribution function on a Polish ultrametric space is a cumulative distribution function with respect to some probability measure on the space. (Author)

  4. Two-dimensional flexible nanoelectronics

    Science.gov (United States)

    Akinwande, Deji; Petrone, Nicholas; Hone, James

    2014-12-01

    2014/2015 represents the tenth anniversary of modern graphene research. Over this decade, graphene has proven to be attractive for thin-film transistors owing to its remarkable electronic, optical, mechanical and thermal properties. Even its major drawback--zero bandgap--has resulted in something positive: a resurgence of interest in two-dimensional semiconductors, such as dichalcogenides and buckled nanomaterials with sizeable bandgaps. With the discovery of hexagonal boron nitride as an ideal dielectric, the materials are now in place to advance integrated flexible nanoelectronics, which uniquely take advantage of the unmatched portfolio of properties of two-dimensional crystals, beyond the capability of conventional thin films for ubiquitous flexible systems.

  5. One-dimensional quantum walk with a moving boundary

    International Nuclear Information System (INIS)

    Kwek, Leong Chuan; Setiawan

    2011-01-01

    Quantum walks are interesting models with potential applications to quantum algorithms and physical processes such as photosynthesis. In this paper, we study two models of one-dimensional quantum walks, namely, quantum walks with a moving absorbing wall and quantum walks with one stationary and one moving absorbing wall. For the former, we calculate numerically the survival probability, the rate of change of the average position, and the rate of change of the standard deviation of the particle's position in the long-time limit for different wall velocities. Moreover, we also study the asymptotic behavior and the dependence of the survival probability on the initial particle's state. For the latter, we compute the absorption probability of the right stationary wall for different velocities and initial positions of the left wall boundary. The results for these two models are compared with those obtained for the classical model. The difference between the results obtained for the quantum and classical models can be attributed to the difference in the probability distributions.
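The contrast with the classical model noted above is easy to reproduce for an unbounded Hadamard walk. This is a sketch without the absorbing walls studied in the paper; modeling those would additionally require projecting out the amplitude at the wall position after every step.

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time Hadamard walk on the line. Amplitudes are stored as
    amp[position, coin]; each step applies the coin operator and then a
    coin-conditioned shift. Returns the position probability distribution."""
    n = 2 * steps + 1
    amp = np.zeros((n, 2), dtype=complex)
    amp[n // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ h.T                  # coin toss at every site
        new = np.zeros_like(amp)
        new[1:, 0] = amp[:-1, 0]         # coin state 0 shifts right
        new[:-1, 1] = amp[1:, 1]         # coin state 1 shifts left
        amp = new
    return (np.abs(amp) ** 2).sum(axis=1)

p = hadamard_walk(100)
x = np.arange(len(p)) - len(p) // 2
print(p.sum())                           # ≈ 1: probability is conserved
print(np.sqrt((p * x ** 2).sum()))       # ballistic spread, ≈ 0.54 * steps
```

The standard deviation grows linearly in the number of steps, whereas the classical random walk spreads only as the square root, which is the root of the quantum-classical differences the paper reports.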

  6. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP

  7. The p-sphere and the geometric substratum of power-law probability distributions

    International Nuclear Information System (INIS)

    Vignat, C.; Plastino, A.

    2005-01-01

    Links between power law probability distributions and marginal distributions of uniform laws on p-spheres in R^n show that a mathematical derivation of the Boltzmann-Gibbs distribution necessarily passes through power law ones. Results are also given that link the parameters p and n to the value of the non-extensivity parameter q that characterizes these power laws in the context of non-extensive statistics

  8. Two-dimensional Tissue Image Reconstruction Based on Magnetic Field Data

    Directory of Open Access Journals (Sweden)

    J. Dedkova

    2012-09-01

    Full Text Available This paper introduces new possibilities within two-dimensional reconstruction of internal conductivity distribution. In addition to the electric field inside the given object, the injected current causes a magnetic field, which can be measured either outside the object by means of a Hall probe or inside the object through magnetic resonance imaging. Magnetic resonance electrical impedance tomography (MREIT) is well known as a bio-imaging modality providing cross-sectional conductivity images with good spatial resolution from measurements of the internal magnetic flux density produced by externally injected currents. A new algorithm for conductivity reconstruction, which utilizes the internal current information with respect to the corresponding boundary conditions and the external magnetic field, was developed. A series of computer simulations has been conducted to assess the performance of the proposed algorithm in estimating electrical conductivity changes in the lung, heart, and brain tissues captured in two-dimensional piecewise homogeneous chest and head models. The conductivity distribution reconstructed using the proposed method is compared with that obtained using a conventional method based on electrical impedance tomography (EIT). The acquired experience is discussed and the direction of further research is proposed.

  9. Calculation of multi-dimensional dose distribution in medium due to proton beam incidence

    International Nuclear Information System (INIS)

    Kawachi, Kiyomitsu; Inada, Tetsuo

    1978-01-01

    The method of analyzing the multi-dimensional dose distribution in a medium due to proton beam incidence is presented, with the aim of obtaining a reliable yet simple method from the clinical viewpoint, especially for the medical treatment of cancer. The heavy ion beam extracted from an accelerator has to be adjusted to fit the cancer location and size, utilizing a modified range modulator, a ridge filter, a bolus, and a special scanning apparatus. A precise calculation of the multi-dimensional dose distribution of the proton beam is needed to confine the treatment to a limited region. The analytical formulas comprise those for the fluence distribution in a medium, the range straggling, the energy distribution itself, the lateral dose distribution, and the two-dimensional dose distribution. Presented and evaluated as analytical results are: the fluence distribution in polystyrene for protons with incident energies of 40 and 60 MeV; the energy distribution of protons at the position of the Bragg peak for various incident energies; the depth dose distribution in polystyrene for protons with incident energies of 40 and 60 MeV and an average energy of 100 MeV; the proton fluence and dose distributions as functions of depth for an incident average energy of 250 MeV; the statistically estimated percentage errors in the proton fluence and dose distributions; the estimated minimum detectable tumor thickness as a function of the number of incident protons for different incident spectra with an average energy of 250 MeV; and the isodose distribution in a plane containing the central axis for an incident proton beam of 3 mm diameter and 40 MeV. (Nakai, Y.)

  10. Numerical evidence for two types of localized states in a two-dimensional disordered lattice

    International Nuclear Information System (INIS)

    Tit, N.; Kumar, N.

    1992-06-01

    We report results of our numerical calculations, based on the equation-of-motion method, of the dc electrical conductivity and of the density of states for two-dimensional square lattices of up to 40×40 sites, modelling a tight-binding Hamiltonian for a binary (AB) compound disordered by randomly distributed B vacancies of up to 10%. Our results indicate strongly localized states away from the band centers, separated from the relatively weakly localized states toward midband. This is in qualitative agreement with the idea of a ''mobility edge'' separating exponentially localized states from power-law localized states, as suggested by the two-parameter scaling theory of Kaveh in two dimensions. (author). 7 refs, 4 figs

  11. Two-dimensional atom localization via probe absorption in a four-level atomic system

    International Nuclear Information System (INIS)

    Wang Zhi-Ping; Ge Qiang; Ruan Yu-Hua; Yu Ben-Li

    2013-01-01

    We have investigated two-dimensional (2D) atom localization via probe absorption in a coherently driven four-level atomic system by means of a radio-frequency field driving a hyperfine transition. It is found that the detection probability and precision of 2D atom localization can be significantly improved by adjusting the system parameters. As a result, our scheme may be helpful in laser cooling or atom nano-lithography via atom localization

  12. Long-lived trimers in a quasi-two-dimensional Fermi system

    Science.gov (United States)

    Laird, Emma K.; Kirk, Thomas; Parish, Meera M.; Levinsen, Jesper

    2018-04-01

    We consider the problem of three distinguishable fermions confined to a quasi-two-dimensional (quasi-2D) geometry, where there is a strong harmonic potential in one direction. We go beyond previous theoretical work and investigate the three-body bound states (trimers) for the case where the two-body short-range interactions between fermions are unequal. Using the scattering parameters from experiments on ultracold 6Li atoms, we calculate the trimer spectrum throughout the crossover from two to three dimensions. We find that the deepest Efimov trimer in the 6Li system is unaffected by realistic quasi-2D confinements, while the first excited trimer smoothly evolves from a three-dimensional-like Efimov trimer to an extended 2D-like trimer as the attractive interactions are decreased. We furthermore compute the excited trimer wave function and quantify the stability of the trimer against decay into a dimer and an atom by determining the probability that three fermions approach each other at short distances. Our results indicate that the lifetime of the trimer can be enhanced by at least an order of magnitude in the quasi-2D geometry, thus opening the door to realizing long-lived trimers in three-component Fermi gases.

  13. Application of Gaussian cubature to model two-dimensional population balances

    Directory of Open Access Journals (Sweden)

    Bałdyga Jerzy

    2017-09-01

    Full Text Available In many systems of engineering interest the moment transformation of population balance is applied. One of the methods to solve the transformed population balance equations is the quadrature method of moments. It is based on the approximation of the density function in the source term by the Gaussian quadrature so that it preserves the moments of the original distribution. In this work we propose another method to be applied to the multivariate population balance problem in chemical engineering, namely a Gaussian cubature (GC) technique that applies linear programming for the approximation of the multivariate distribution. Examples of the application of the Gaussian cubature (GC) are presented for four processes typical of chemical engineering applications. The first and second are devoted to crystallization modeling with direction-dependent two-dimensional and three-dimensional growth rates, the third represents drop dispersion accompanied by mass transfer in liquid-liquid dispersions, and the fourth case regards the aggregation and sintering of particle populations.
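The moment-preservation property that motivates quadrature-based closures can be illustrated in one dimension (a generic sketch, not the paper's cubature): an N-node Gaussian quadrature reproduces the moments of its weight distribution exactly up to order 2N-1.

```python
import numpy as np

# 4-node Gauss-Hermite quadrature (probabilists' convention, weight e^{-x^2/2})
nodes, weights = np.polynomial.hermite_e.hermegauss(4)
w = weights / weights.sum()            # normalize to a standard-normal measure

# The quadrature reproduces standard-normal moments exactly up to order 2N-1 = 7:
moments = [float(np.sum(w * nodes**k)) for k in range(8)]
# expected: 1, 0, 1, 0, 3, 0, 15, 0
```

In the quadrature method of moments the logic runs in reverse: the tracked moments of the number density are used to reconstruct the nodes and weights, which then close the source term.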

  14. Biomedical applications of two- and three-dimensional deterministic radiation transport methods

    International Nuclear Information System (INIS)

    Nigg, D.W.

    1992-01-01

    Multidimensional deterministic radiation transport methods are routinely used in support of the Boron Neutron Capture Therapy (BNCT) Program at the Idaho National Engineering Laboratory (INEL). Typical applications of two-dimensional discrete-ordinates methods include neutron filter design, as well as phantom dosimetry. The epithermal-neutron filter for BNCT that is currently available at the Brookhaven Medical Research Reactor (BMRR) was designed using such methods. Good agreement between calculated and measured neutron fluxes was observed for this filter. Three-dimensional discrete-ordinates calculations are used routinely for dose-distribution calculations in three-dimensional phantoms placed in the BMRR beam, as well as for treatment planning verification for live canine subjects. Again, good agreement between calculated and measured neutron fluxes and dose levels is obtained

  15. Probability theory a foundational course

    CERN Document Server

    Pakshirajan, R P

    2013-01-01

    This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the groundwork to later claim the existence of stochastic processes with prescribed finite-dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, the Erdos-Kac invariance principle, the functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a textbook for students pursuing graduate programs in Mathematics and/or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.

  16. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

    …seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first order statistics. An accuracy comparable to that of secondary…
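The 30°×30° binning of the (phi, psi) dihedral space used as the prediction target can be sketched as follows (an illustrative helper, not the authors' code; angles assumed in degrees in [-180, 180)):

```python
def dihedral_bin(phi, psi, width=30):
    """Map a (phi, psi) dihedral pair to one of the (360/width)^2 grid bins."""
    per_axis = 360 // width            # 12 bins per axis for width=30
    i = int((phi + 180) // width)      # phi bin index
    j = int((psi + 180) // width)      # psi bin index
    return i * per_axis + j            # 0 .. 143 for width=30

# e.g. an alpha-helix-like residue: dihedral_bin(-60, -45) -> bin 52
```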

  17. Two-dimensional shielding benchmarks for iron at YAYOI, (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; An, Shigehiro; Kasai, Shigeru; Miyasaka, Shun-ichi; Koyama, Kinji.

    The aim of this work is to assess the collapsed neutron and gamma multigroup cross sections for a two-dimensional discrete ordinates transport code. Two-dimensional distributions of the neutron flux and gamma-ray dose through a 70 cm thick, 94 cm square iron shield were measured at the fast neutron source reactor ''YAYOI''. The iron shield was placed over the lead reflector in the vertical experimental column surrounded by a heavy concrete wall. The detectors used in this experiment were threshold detectors (In, Ni, Al, Mg, Fe and Zn), sandwich resonance detectors (Au, W and Co), Au activation foils for neutrons, and thermoluminescence detectors for the gamma-ray dose. The experimental results were compared with those calculated by the discrete ordinates transport codes ANISN and TWOTRAN. The region-wise, coupled neutron-gamma multigroup cross sections (100n + 20gamma, EURLIB structure) were generated from the ENDF/B-IV library for neutrons and the POPOP4 library for gamma-ray production cross sections by using the code system RADHEAT. The effective microscopic neutron cross sections were obtained from the infinite-dilution values by applying ABBN-type self-shielding factors. The gamma-ray production multigroup cross sections were calculated from these effective microscopic neutron cross sections. For the two-dimensional calculations the group constants were collapsed into 10 neutron groups and 3 gamma groups by using ANISN. (auth.)

  18. Two-dimensional distribution of electron temperature in ergodic layer of LHD measured from line intensity ratio of CIV and NeVIII

    International Nuclear Information System (INIS)

    Wang, Erhui; Morita, Shigeru; Goto, Motoshi; Murakami, Izumi; Oishi, Tetsutarou; Dong, Chunfeng

    2013-01-01

    Two-dimensional distributions of impurity lines emitted from the ergodic layer with stochastic magnetic field lines in the Large Helical Device (LHD) have been observed using a space-resolved extreme ultraviolet (EUV) spectrometer. The two-dimensional electron temperature distribution in the ergodic layer is successfully measured using the line intensity ratio of the Li-like NeVIII 2s-3p (²S₁/₂-²P₃/₂: 88.09 Å, ²S₁/₂-²P₁/₂: 88.13 Å) to 2p-3s (²P₁/₂-²S₁/₂: 102.91 Å, ²P₃/₂-²S₁/₂: 103.09 Å) transitions emitted from the radial location near the Last Closed Flux Surface (LCFS). The intensity ratio analyzed with the ADAS code shows no dependence on the electron density below 10¹⁴ cm⁻³. The result indicates a slightly higher temperature, i.e., 220 eV, at the poloidal location on the high-field side near the helical coils, called the O-point, compared to the temperature near the X-point, i.e., 170 eV. The electron temperature profile is also measured at the edge boundary of the ergodic layer using the line intensity ratio of the Li-like CIV 2p-3d (²P₁/₂-²D₃/₂: 384.03 Å, ²P₃/₂-²D₅/₂: 384.18 Å) to 2p-3s (²P₁/₂-²S₁/₂: 419.53 Å, ²P₃/₂-²S₁/₂: 419.71 Å) transitions. The intensity ratios analyzed with the CHIANTI, ADAS and T. Kawachi codes show a slightly higher temperature near the O-point, i.e., 25 eV for CHIANTI, 21 eV for ADAS and 11 eV for T. Kawachi's code, compared to the temperature at the X-point, i.e., 15-21 eV for CHIANTI, 9-15 eV for ADAS and 6-9 eV for T. Kawachi's code. This suggests that the transport coefficient in the ergodic layer varies with the three-dimensional structure. (author)

  19. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
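The comparison can be reproduced in miniature (a hedged sketch with invented sample sizes and sampling intervals, not the study's data): fit a lognormal to interval-censored data through its cumulative distribution, and contrast with a naive fit to interval midpoints.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
true = stats.lognorm(s=0.5, scale=30)            # hypothetical retention-time law
sample = true.rvs(size=500, random_state=rng)

edges = np.arange(0.0, 121.0, 10.0)              # discrete sampling intervals
counts, _ = np.histogram(sample, bins=edges)
ecdf = np.cumsum(counts) / sample.size           # empirical CDF at interval upper bounds

# Approach favoured by the study: least-squares fit to the cumulative distribution
def cdf(t, s, scale):
    return stats.lognorm.cdf(t, s, scale=scale)

(fit_s, fit_scale), _ = optimize.curve_fit(
    cdf, edges[1:], ecdf, p0=(1.0, 20.0),
    bounds=([0.01, 1.0], [5.0, 200.0]))

# Naive alternative: maximum likelihood on interval midpoints
mids = np.repeat((edges[:-1] + edges[1:]) / 2, counts)
mid_s, _, mid_scale = stats.lognorm.fit(mids, floc=0)

# Kolmogorov-Smirnov-style deviation of each fit from the true CDF
t = np.linspace(1, 120, 400)
dev_cdf = np.max(np.abs(cdf(t, fit_s, fit_scale) - true.cdf(t)))
dev_mid = np.max(np.abs(cdf(t, mid_s, mid_scale) - true.cdf(t)))
```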

  20. Beginning Introductory Physics with Two-Dimensional Motion

    Science.gov (United States)

    Huggins, Elisha

    2009-01-01

    During the session on "Introductory College Physics Textbooks" at the 2007 Summer Meeting of the AAPT, there was a brief discussion about whether introductory physics should begin with one-dimensional motion or two-dimensional motion. Here we present the case that by starting with two-dimensional motion, we are able to introduce a considerable…

  1. Two-dimensional thermofield bosonization

    International Nuclear Information System (INIS)

    Amaral, R.L.P.G.; Belvedere, L.V.; Rothe, K.D.

    2005-01-01

    The main objective of this paper was to obtain an operator realization for the bosonization of fermions in 1 + 1 dimensions, at finite, non-zero temperature T. This is achieved in the framework of the real-time formalism of Thermofield Dynamics. Formally, the results parallel those of the T = 0 case. The well-known two-dimensional Fermion-Boson correspondences at zero temperature are shown to hold also at finite temperature. To emphasize the usefulness of the operator realization for handling a large class of two-dimensional quantum field-theoretic problems, we contrast this global approach with the cumbersome calculation of the fermion-current two-point function in the imaginary-time formalism and real-time formalisms. The calculations also illustrate the very different ways in which the transmutation from Fermi-Dirac to Bose-Einstein statistics is realized

  2. Two-dimensional x-ray diffraction

    CERN Document Server

    He, Bob B

    2009-01-01

    Written by one of the pioneers of 2D X-Ray Diffraction, this useful guide covers the fundamentals, experimental methods and applications of two-dimensional x-ray diffraction, including geometry convention, x-ray source and optics, two-dimensional detectors, diffraction data interpretation, and configurations for various applications, such as phase identification, texture, stress, microstructure analysis, crystallinity, thin film analysis and combinatorial screening. Experimental examples in materials research, pharmaceuticals, and forensics are also given. This presents a key resource to resea

  3. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit

    2014-07-28

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  4. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2014-01-01

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.
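The bivariate ancestors of this construction are easy to demonstrate numerically (a standard textbook example, not the paper's p-dimensional construction): flipping the sign of a standard normal in its tails leaves both marginals Gaussian while the joint law is not.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1.54                                 # near the cutoff that makes X and Y uncorrelated
x = rng.standard_normal(100_000)
y = np.where(np.abs(x) <= c, x, -x)      # y is standard normal by symmetry

corr = np.corrcoef(x, y)[0, 1]           # ~0: uncorrelated yet fully dependent
frac_zero = np.mean(x + y == 0.0)        # X+Y has an atom at 0, impossible if jointly Gaussian
```

The point mass at zero in X+Y (with probability 2(1-Phi(c)), about 0.12 here) is the give-away: any linear combination of jointly Gaussian variables must itself be Gaussian, hence continuous.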

  5. Outage probability of distributed beamforming with co-channel interference

    KAUST Repository

    Yang, Liang

    2012-03-01

    In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.
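The flavor of such outage analyses can be sketched with a single Rayleigh-faded link and equal-power interferers (a generic interference-limited SIR model with assumed unit-mean links, not the letter's DBF derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_int, gamma_th, trials = 3, 1.0, 200_000

# Rayleigh fading -> exponentially distributed channel power gains (unit mean)
sig = rng.exponential(1.0, size=trials)
intf = rng.exponential(1.0, size=(trials, n_int)).sum(axis=1)

outage_mc = np.mean(sig / intf < gamma_th)       # Monte Carlo outage probability
outage_th = 1 - (1 + gamma_th) ** (-n_int)       # closed form for i.i.d. unit-mean links
```

The closed form follows from averaging the exponential tail of the signal power over the Gamma-distributed aggregate interference; the letter's expressions generalize this to beamformed relays.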

  6. Development of self-propelled measuring system for 2-dimensional distribution of radiation beam using plastic scintillation fibers

    International Nuclear Information System (INIS)

    Matsumura, Shuji; Kitahara, Sigeo; Yamanishi, Akio; Nose, Hiroyuki; Tisaka, Osamu

    2013-01-01

    A conventional two-dimensional distribution of a radiation beam is usually estimated from dose rates at many dispersed spots, which has two problems: it takes much time to measure the distribution over a large area, and it is difficult to detect a localized hot spot from dispersed measurement results. To solve these problems we have developed a self-propelled measuring system adopting plastic scintillation fibers (PSF) as the detector. Estimating the dose distribution along the PSF and scanning the PSF with the self-propelled system give a two-dimensional distribution of the radiation beam in a shorter measuring time and with better spatial resolution than usual. A global positioning system was also installed in our system to determine the absolute position of interest. With this system we have verified that we can estimate the two-dimensional distribution over an area of 2,000 m² in an hour. This report describes the overview of our newly developed system. (author)

  7. Mixing times in quantum walks on two-dimensional grids

    International Nuclear Information System (INIS)

    Marquezino, F. L.; Portugal, R.; Abal, G.

    2010-01-01

    Mixing properties of discrete-time quantum walks on two-dimensional grids with toruslike boundary conditions are analyzed, focusing on their connection to the complexity of the corresponding abstract search algorithm. In particular, an exact expression for the stationary distribution of the coherent walk over odd-sided lattices is obtained after solving the eigenproblem for the evolution operator for this particular graph. The limiting distribution and mixing time of a quantum walk with a coin operator modified as in the abstract search algorithm are obtained numerically. On the basis of these results, the relation between the mixing time of the modified walk and the running time of the corresponding abstract search algorithm is discussed.

  8. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  9. Piezoelectricity in Two-Dimensional Materials

    KAUST Repository

    Wu, Tao

    2015-02-25

    Powering up 2D materials: Recent experimental studies confirmed the existence of piezoelectricity - the conversion of mechanical stress into electricity - in two-dimensional single-layer MoS2 nanosheets. The results represent a milestone towards embedding low-dimensional materials into future disruptive technologies. © 2015 Wiley-VCH Verlag GmbH & Co. KGaA.

  10. Elastic wave localization in two-dimensional phononic crystals with one-dimensional random disorder and aperiodicity

    International Nuclear Information System (INIS)

    Yan Zhizhong; Zhang Chuanzeng; Wang Yuesheng

    2011-01-01

    The band structures of in-plane elastic waves propagating in two-dimensional phononic crystals with one-dimensional random disorder and aperiodicity are analyzed in this paper. The localization of wave propagation is discussed by introducing the concept of the localization factor, which is calculated by the plane-wave-based transfer-matrix method. By treating the random disorder and aperiodicity as deviations from periodicity in a special way, three kinds of aperiodic phononic crystals are considered, having normally distributed random disorder, or a Thue-Morse or Rudin-Shapiro sequence, in one direction and translational symmetry in the other direction, and their band structures are characterized using localization factors. In addition, as a special case, we analyze the band gap properties of a periodic planar layered composite containing a periodic array of square inclusions. The transmission coefficients based on eigenmode matching theory are also calculated, and the results show the same behavior as the localization factor. In the case of random disorder, the localization degree for normally distributed random disorder is larger than that for uniformly distributed random disorder, although the eigenstates are localized for both types of disorder; for the Thue-Morse and Rudin-Shapiro structures, the band structures of the Thue-Morse sequence exhibit similarities with the quasi-periodic (Fibonacci) sequence that are not present in the results of the Rudin-Shapiro sequence.

  11. Two-dimensional analytical solution for nodal calculation of nuclear reactors

    International Nuclear Information System (INIS)

    Silva, Adilson C.; Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2017-01-01

    Highlights: • A proposal for a coarse-mesh nodal method is presented. • The proposal uses the analytical solution of the two-dimensional neutron diffusion equation. • The solution is performed on homogeneous nodes with the dimensions of the fuel assembly. • The solution uses four average fluxes on the node surfaces as boundary conditions. • The results show good accuracy and efficiency. - Abstract: In this paper, the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G). The spatial domain of the reactor core is divided into a set of nodes with uniform nuclear parameters. To determine the multiplication factor and the neutron flux in the reactor iteratively, we combine the analytical solution of the neutron diffusion equation with an iterative method known as the power method. The analytical solution is obtained for the different types of regions that compose the reactor, such as fuel and reflector regions. Four average fluxes on the node surfaces are used as boundary conditions for the analytical solution. Discontinuity factors on the node surfaces, derived from the homogenization process, are applied to maintain the average reaction rates and the net current in the fuel assembly (FA). To validate the results obtained by the analytical solution, a relative power density distribution in the FAs is determined from the neutron flux distribution and compared with reference values. The results show good accuracy and efficiency.
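The power-method part of the scheme can be illustrated on a deliberately simplified problem (a one-group, one-dimensional finite-difference sketch with invented cross sections, in place of the paper's two-group 2D analytical nodes):

```python
import numpy as np

# Illustrative one-group constants (assumed values, not from the paper)
D, sig_a, nu_sig_f = 1.0, 0.02, 0.025      # diffusion coeff., absorption, fission yield
L, N = 100.0, 50                           # slab width (cm), mesh cells
h = L / N

# Loss operator M = -D d^2/dx^2 + sig_a with zero-flux boundaries
M = np.zeros((N, N))
for i in range(N):
    M[i, i] = 2 * D / h**2 + sig_a
    if i > 0:
        M[i, i - 1] = -D / h**2
    if i < N - 1:
        M[i, i + 1] = -D / h**2

# Power iteration: solve M phi_new = (1/k) F phi_old, F = nu_sig_f * I
phi = np.ones(N)
k = 1.0
for _ in range(300):
    phi_new = np.linalg.solve(M, nu_sig_f * phi / k)
    k *= phi_new.sum() / phi.sum()         # eigenvalue update from fission-source ratio
    phi = phi_new / np.linalg.norm(phi_new)
```

The iteration converges to the fundamental mode (here a cosine-like flux) and to k as the dominant eigenvalue of the loss-to-production operator; the paper replaces the finite-difference solve with the nodal analytical solution while keeping the same outer iteration.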

  12. Two-dimensional confinement of heavy fermions

    International Nuclear Information System (INIS)

    Shishido, Hiroaki; Shibauchi, Takasada; Matsuda, Yuji; Terashima, Takahito

    2010-01-01

    Metallic systems with the strongest electron correlations are realized in certain rare-earth and actinide compounds whose physics is dominated by f-electrons. These materials are known as heavy fermions, so called because the effective mass of the conduction electrons is enhanced via correlation effects up to as much as several hundred times the free electron mass. To date, the electronic structure of all heavy-fermion compounds has been essentially three-dimensional. Here we report the first realization of a two-dimensional heavy-fermion system, where the dimensionality is adjusted in a controllable fashion by fabricating heterostructures using molecular beam epitaxy. The two-dimensional heavy-fermion system displays striking deviations from the standard Fermi-liquid low-temperature electronic properties. (author)

  13. Two-dimensional inverse planning and delivery with a preclinical image guided microirradiator

    International Nuclear Information System (INIS)

    Stewart, James M. P.; Lindsay, Patricia E.; Jaffray, David A.

    2013-01-01

    Purpose: Recent advances in preclinical radiotherapy systems have provided the foundation for scaling many of the elements of clinical radiation therapy practice to the dimensions and energy demanded in small animal studies. Such systems support the technical capabilities to accurately deliver highly complex dose distributions, but methods to optimize and deliver such distributions remain in their infancy. This study developed an optimization method based on empirically measured two-dimensional dose kernel measurements to deliver arbitrary planar dose distributions on a recently developed small animal radiotherapy platform.Methods: A two-dimensional dose kernel was measured with repeated radiochromic film measurements for the circular 1 mm diameter fixed collimator of the small animal radiotherapy system at 1 cm depth in a solid water phantom. This kernel was utilized in a sequential quadratic programming optimization framework to determine optimal beam positions and weights to deliver an arbitrary desired dose distribution. The positions and weights were then translated to a set of stage motions to automatically deliver the optimized dose distribution. End-to-end efficacy of the framework was quantified through five repeated deliveries of two dosimetric challenges: (1) a 5 mm radius bullseye distribution, and (2) a “sock” distribution contained within a 9 × 13 mm bounding box incorporating rectangular, semicircular, and exponentially decaying geometric constructs and a rectangular linear dose gradient region. These two challenges were designed to gauge targeting, geometric, and dosimetric fidelity.Results: Optimization of the bullseye and sock distributions required 2.1 and 5.9 min and utilized 50 and 77 individual beams for delivery, respectively. Automated delivery of the resulting optimized distributions, validated using radiochromic film measurements, revealed an average targeting accuracy of 0.32 mm, and a dosimetric delivery error along four line
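The kernel-superposition optimization step can be sketched in one dimension (illustrative geometry and kernel width are assumptions; the study used a measured 2D kernel and sequential quadratic programming, replaced here by nonnegative least squares):

```python
import numpy as np
from scipy.optimize import nnls

x = np.linspace(0.0, 20.0, 201)                 # dose grid, mm
positions = np.arange(2.0, 18.01, 0.5)          # candidate beam centres, mm
sigma = 0.6                                     # assumed kernel width, mm

# Dose-kernel matrix: column j is the profile of a unit-weight beam at positions[j]
K = np.exp(-((x[:, None] - positions[None, :]) ** 2) / (2 * sigma**2))

target = ((x >= 6.0) & (x <= 14.0)).astype(float)   # desired flat 8 mm plateau
weights, resid = nnls(K, target)                    # optimal nonnegative beam weights

dose = K @ weights
```

The nonnegativity constraint mirrors the physical fact that beam-on times cannot be negative; the residual is dominated by the unavoidable penumbra at the plateau edges, while the interior is matched closely.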

  14. On the SIMS Ionization Probability of Organic Molecules.

    Science.gov (United States)

    Popczun, Nicholas J; Breuer, Lars; Wucher, Andreas; Winograd, Nicholas

    2017-06-01

    The prospect of improved secondary ion yields for secondary ion mass spectrometry (SIMS) experiments drives innovation of new primary ion sources, instrumentation, and post-ionization techniques. The largest factor affecting secondary ion efficiency is believed to be the poor ionization probability (α⁺) of sputtered material, a value rarely measured directly, but estimated to be in some cases as low as 10⁻⁵. Our lab has developed a method for the direct determination of α⁺ in a SIMS experiment using laser post-ionization (LPI) to detect neutral molecular species in the sputtered plume for an organic compound. Here, we apply this method to coronene (C₂₄H₁₂), a polyaromatic hydrocarbon that exhibits strong molecular signal during gas-phase photoionization. A two-dimensional spatial distribution of sputtered neutral molecules is measured and presented. It is shown that the ionization probability of molecular coronene desorbed from a clean film under bombardment with 40 keV C₆₀ cluster projectiles is of the order of 10⁻³, with some remaining uncertainty arising from laser-induced fragmentation and possible differences in the emission velocity distributions of neutral and ionized molecules. In general, this work establishes a method to estimate the ionization efficiency of molecular species sputtered during a single bombardment event.

  15. Gas Hydrate Formation Probability Distributions: The Effect of Shear and Comparisons with Nucleation Theory.

    Science.gov (United States)

    May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M

    2018-03-13

    Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
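
    The link between a nucleation rate and a formation probability distribution measured over repeated cooling cycles can be sketched as follows. The rate law and every parameter value (J0, B, cooling rate) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def formation_cdf(dT, J0=1e-2, B=50.0, rate=0.03):
    # classical-nucleation-style rate J(dT) = J0 * exp(-B / dT^2);
    # for a linear cooling ramp of rate r, P(formed by dT) = 1 - exp(-(1/r) * int_0^dT J)
    J = J0 * np.exp(-B / np.maximum(dT, 1e-9) ** 2)
    integral = np.cumsum(J) * (dT[1] - dT[0])
    return 1.0 - np.exp(-integral / rate)

dT = np.linspace(0.01, 30.0, 3000)   # subcooling grid, K
cdf = formation_cdf(dT)
# subcooling by which half of the repeated cooling runs have formed hydrate
median_subcooling = dT[np.searchsorted(cdf, 0.5)]
```

    A measured distribution substantially wider than such a prediction is exactly the discrepancy the authors attribute to phenomena acting on macroscopic length scales.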

  16. Two-dimensional topological photonics

    Science.gov (United States)

    Khanikaev, Alexander B.; Shvets, Gennady

    2017-12-01

    Originating from the studies of two-dimensional condensed-matter states, the concept of topological order has recently been expanded to other fields of physics and engineering, particularly optics and photonics. Topological photonic structures have already overturned some of the traditional views on wave propagation and manipulation. The application of topological concepts to guided wave propagation has enabled novel photonic devices, such as reflection-free sharply bent waveguides, robust delay lines, spin-polarized switches and non-reciprocal devices. Discrete degrees of freedom, widely used in condensed-matter physics, such as spin and valley, are now entering the realm of photonics. In this Review, we summarize the latest advances in this highly dynamic field, with special emphasis on the experimental work on two-dimensional photonic topological structures.

  17. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-04-25

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can outperform the previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) with two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  18. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    International Nuclear Information System (INIS)

    Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei

    2017-01-01

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can outperform the previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) with two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  19. Structures of two-dimensional three-body systems

    International Nuclear Information System (INIS)

    Ruan, W.Y.; Liu, Y.Y.; Bao, C.G.

    1996-01-01

    Features of the structure of L = 0 states of a two-dimensional three-body model system have been investigated. Three types of permutation symmetry of the spatial part, namely symmetric, antisymmetric, and mixed, have been considered. A comparison has been made between the two-dimensional system and the corresponding three-dimensional one. The effect of symmetry on microscopic structures is emphasized. (author)

  20. A Method of Visualizing Three-Dimensional Distribution of Yeast in Bread Dough

    Science.gov (United States)

    Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Shiraga, Seizaburou; Ueda, Mitsuyoshi; Takeya, Koji; Endo, Shigeru

    A novel technique was developed to monitor the change in three-dimensional (3D) distribution of yeast in frozen bread dough samples in accordance with the progress of mixing process. Application of a surface engineering technology allowed the identification of yeast in bread dough by bonding EGFP (Enhanced Green Fluorescent Protein) to the surface of yeast cells. The fluorescent yeast (a biomarker) was recognized as bright spots at the wavelength of 520 nm. A Micro-Slicer Image Processing System (MSIPS) with a fluorescence microscope was utilized to acquire cross-sectional images of frozen dough samples sliced at intervals of 1 μm. A set of successive two-dimensional images was reconstructed to analyze 3D distribution of yeast. Samples were taken from each of four normal mixing stages (i.e., pick up, clean up, development, and final stages) and also from over mixing stage. In the pick up stage yeast distribution was uneven with local areas of dense yeast. As the mixing progressed from clean up to final stages, the yeast became more evenly distributed throughout the dough sample. However, the uniformity in yeast distribution was lost in the over mixing stage possibly due to the breakdown of gluten structure within the dough sample.

  1. Covariance problem in two-dimensional quantum chromodynamics

    International Nuclear Information System (INIS)

    Hagen, C.R.

    1979-01-01

    The problem of covariance in the field theory of a two-dimensional non-Abelian gauge field is considered. Since earlier work has shown that covariance fails (in charged sectors) for the Schwinger model, particular attention is given to an evaluation of the role played by the non-Abelian nature of the fields. In contrast to all earlier attempts at this problem, it is found that the potential covariance-breaking terms are identical to those found in the Abelian theory provided that one expresses them in terms of the total (i.e., conserved) current operator. The question of covariance is thus seen to reduce in all cases to a determination as to whether there exists a conserved global charge in the theory. Since the charge operator in the Schwinger model is conserved only in neutral sectors, one is thereby led to infer a probable failure of covariance in the non-Abelian theory, but one which is identical to that found for the U(1) case.

  2. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  3. Slip-line field analysis of metal flow during two dimensional forging

    International Nuclear Information System (INIS)

    Fenton, R.G.; Khataan, H.A.

    1981-01-01

    A method of computation and a computer software package were developed for solving problems of two dimensional plastic flow between symmetrical dies of any specified shape. The load required to initiate plastic flow, the stress and velocity distributions in the plastic region of the metal, and the pressure distribution acting on the die are determined. The method can be used to solve any symmetrical plane strain flow problem regardless of the complexity of the die. The accurate solution obtained by this efficient method can provide valuable help to forging die designers. (Author) [pt

  4. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

    Full Text Available Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.
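
    A much simpler periodic density estimator conveys the flavor of estimating Ramachandran densities. The product von Mises kernel and the synthetic helix-like cluster below are illustrative stand-ins for the paper's hierarchical Dirichlet process model and real loop data:

```python
import numpy as np

def ramachandran_kde(phi, psi, grid_n=60, kappa=20.0):
    # periodic kernel density estimate on the (phi, psi) torus using a
    # product von Mises kernel with concentration kappa
    grid = np.linspace(-np.pi, np.pi, grid_n, endpoint=False)
    gphi, gpsi = np.meshgrid(grid, grid, indexing="ij")
    norm = (2.0 * np.pi * np.i0(kappa)) ** 2  # von Mises normalizer, squared
    dens = np.zeros((grid_n, grid_n))
    for p, s in zip(phi, psi):
        dens += np.exp(kappa * (np.cos(gphi - p) + np.cos(gpsi - s)))
    return grid, dens / (len(phi) * norm)

rng = np.random.default_rng(0)
# synthetic alpha-helix-like cluster of dihedral angles (illustrative only)
phi = rng.vonmises(-1.1, 8.0, size=400)
psi = rng.vonmises(-0.7, 8.0, size=400)
grid, dens = ramachandran_kde(phi, psi)
cell = (2.0 * np.pi / len(grid)) ** 2
total = dens.sum() * cell  # should integrate to ~1 over the torus
```

    Unlike an ordinary Gaussian KDE, the von Mises kernel respects the wrap-around of dihedral angles at ±180°, which matters near the map edges.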

  5. X-ray imaging device for one-dimensional and two-dimensional radioscopy

    International Nuclear Information System (INIS)

    1978-01-01

    The X-ray imaging device for selectable one-dimensional or two-dimensional imaging of objects illuminated by X-rays, comprising an X-ray source, an X-ray screen, and an opto-electrical image-forming device placed behind the screen, is characterized by an anamorphotic optical system positioned between the X-ray screen and the opto-electrical device for one-dimensional imaging, while a two-dimensional image can also be formed, and by an X-ray screen placed in a specified beam direction relative to the lens system forming part of the opto-electrical device, so that a magnified image may be formed by adjusting the distance between the X-ray screen and the lens system. (G.C.)

  6. Hamiltonian formalism of two-dimensional Vlasov kinetic equation.

    Science.gov (United States)

    Pavlov, Maxim V

    2014-12-08

    In this paper, the two-dimensional Benney system describing long wave propagation of a finite depth fluid motion and the multi-dimensional Russo-Smereka kinetic equation describing a bubbly flow are considered. The Hamiltonian approach established by J. Gibbons for the one-dimensional Vlasov kinetic equation is extended to a multi-dimensional case. A local Hamiltonian structure associated with the hydrodynamic lattice of moments derived by D. J. Benney is constructed. A relationship between this hydrodynamic lattice of moments and the two-dimensional Vlasov kinetic equation is found. In the two-dimensional case, a Hamiltonian hydrodynamic lattice for the Russo-Smereka kinetic model is constructed. Simple hydrodynamic reductions are presented.

  7. Novel target design algorithm for two-dimensional optical storage (TwoDOS)

    NARCIS (Netherlands)

    Huang, Li; Chong, T.C.; Vijaya Kumar, B.V.K.; Kobori, H.

    2004-01-01

    In this paper we introduce the Hankel transform based channel model of Two-Dimensional Optical Storage (TwoDOS) system. Based on this model, the two-dimensional (2D) minimum mean-square error (MMSE) equalizer has been derived and applied to some simple but common cases. The performance of the 2D

  8. Resonances in a two-dimensional electron waveguide with a single δ-function scatterer

    International Nuclear Information System (INIS)

    Boese, Daniel; Lischka, Markus; Reichl, L. E.

    2000-01-01

    We study the conductance properties of a straight two-dimensional electron waveguide with an s-like scatterer modeled by a single δ-function potential with a finite number of modes. Even such a simple system exhibits interesting resonance phenomena. These resonances are explained in terms of quasibound states both by using a direct solution of the Schroedinger equation and by studying the Green's function of the system. Using the Green's function we calculate the survival probability as well as the power absorption, and show the influence of the quasibound states on these two quantities. (c) 2000 The American Physical Society

  9. Probability Modeling of Precipitation Extremes over Two River Basins in Northwest of China

    Directory of Open Access Journals (Sweden)

    Zhanling Li

    2015-01-01

    Full Text Available This paper is focused on probability modeling with a range of distribution models over two inland river basins in China, together with estimations of return levels for various return periods. Both annual and seasonal maximum precipitations (MP) are investigated based on daily precipitation data at 13 stations from 1960 to 2010 in the Heihe River and Shiyang River basins. Results show that the GEV, Burr, and Weibull distributions provide the best fit to both annual and seasonal MP, while the Exponential and Pareto 2 distributions show the worst fit. The estimated return levels for spring MP show an overall decreasing trend from the upper to the middle and then to the lower reaches. Summer MP is close to annual MP in both magnitude and spatial distribution. Autumn MP shows slightly higher estimated return levels than spring MP, while remaining consistent with spring MP in spatial distribution. It is also found that the estimated return levels for annual MP derived from the various distributions differ by 22%, 36%, and 53% on average at 20-year, 50-year, and 100-year return periods, respectively.
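
    As a concrete illustration of return-level estimation, here is a minimal sketch using the Gumbel distribution (the zero-shape member of the GEV family named in the abstract), fitted by the method of moments to synthetic annual-maximum data; all numbers are invented for the example, not taken from the paper:

```python
import math
import random

def gumbel_fit(maxima):
    # method-of-moments fit of the Gumbel (GEV with shape xi = 0) distribution
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale
    mu = mean - 0.5772156649 * beta         # location (Euler-Mascheroni constant)
    return mu, beta

def return_level(mu, beta, T):
    # annual-maximum value exceeded on average once every T years
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# synthetic annual maximum precipitation (mm), drawn from a known Gumbel
random.seed(1)
true_mu, true_beta = 40.0, 8.0
sample = [true_mu - true_beta * math.log(-math.log(random.random()))
          for _ in range(200)]
mu, beta = gumbel_fit(sample)
levels = {T: return_level(mu, beta, T) for T in (20, 50, 100)}
```

    The spread between distribution families reported in the abstract (22-53% at long return periods) is a reminder that the choice of tail model dominates exactly where the data are sparsest.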

  10. Two-dimensional ferroelectrics

    Energy Technology Data Exchange (ETDEWEB)

    Blinov, L M; Fridkin, Vladimir M; Palto, Sergei P [A.V. Shubnikov Institute of Crystallography, Russian Academy of Sciences, Moscow, Russian Federation (Russian Federation); Bune, A V; Dowben, P A; Ducharme, Stephen [Department of Physics and Astronomy, Behlen Laboratory of Physics, Center for Materials Research and Analysis, University of Nebraska-Lincoln, Lincoln, NE (United States)

    2000-03-31

    The investigation of the finite-size effect in ferroelectric crystals and films has been limited by the experimental conditions. The smallest demonstrated ferroelectric crystals had a diameter of ≈200 Å and the thinnest ferroelectric films were ≈200 Å thick, macroscopic sizes on an atomic scale. Langmuir-Blodgett deposition of films one monolayer at a time has produced high quality ferroelectric films as thin as 10 Å, made from polyvinylidene fluoride and its copolymers. These ultrathin films permitted the ultimate investigation of finite-size effects on the atomic thickness scale. Langmuir-Blodgett films also revealed the fundamental two-dimensional character of ferroelectricity in these materials by demonstrating that there is no so-called critical thickness; films as thin as two monolayers (1 nm) are ferroelectric, with a transition temperature near that of the bulk material. The films exhibit all the main properties of ferroelectricity with a first-order ferroelectric-paraelectric phase transition: polarization hysteresis (switching); the jump in spontaneous polarization at the phase transition temperature; thermal hysteresis in the polarization; the increase in the transition temperature with applied field; double hysteresis above the phase transition temperature; and the existence of the ferroelectric critical point. The films also exhibit a new phase transition associated with the two-dimensional layers. (reviews of topical problems)

  11. Modeling the radiation transfer of discontinuous canopies: results for gap probability and single-scattering contribution

    Science.gov (United States)

    Zhao, Feng; Zou, Kai; Shang, Hong; Ji, Zheng; Zhao, Huijie; Huang, Wenjiang; Li, Cunjun

    2010-10-01

    In this paper we present an analytical model for the computation of radiation transfer of discontinuous vegetation canopies. Some initial results for the gap probability and bidirectional gap probability of discontinuous vegetation canopies, which are important parameters determining the radiative environment of the canopies, are given and compared with a 3-D computer simulation model. In the model, negative exponential attenuation of light within individual plant canopies is assumed. The computation of gap probability is then resolved by determining the entry and exit points of the ray through the individual plants via their equations in space. For the bidirectional gap probability, which determines the single-scattering contribution of the canopy, a gap statistical analysis based model was adopted to correct the dependence of gap probabilities for both solar and viewing directions. The model incorporates structural characteristics such as plant sizes, leaf size, row spacing, foliage density, planting density, and leaf inclination distribution. Available experimental data are inadequate for a complete validation of the model, so it was evaluated against a three-dimensional computer simulation model for 3D vegetative scenes, which shows good agreement between the two models' results. This model should be useful for the quantification of light interception and the modeling of bidirectional reflectance distributions of discontinuous canopies.
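
    The negative-exponential attenuation assumed within individual plant canopies gives the gap probability in closed form; a minimal sketch, with G and LAI values chosen arbitrarily for illustration:

```python
import math

def gap_probability(lai, theta_deg, G=0.5):
    # Negative-exponential gap probability through a canopy layer:
    # P_gap = exp(-G * LAI / cos(theta)), where G is the leaf projection
    # function (0.5 for a spherical leaf-angle distribution) and theta
    # is the view or solar zenith angle.
    return math.exp(-G * lai / math.cos(math.radians(theta_deg)))

p_nadir = gap_probability(3.0, 0.0)    # looking straight down
p_slant = gap_probability(3.0, 60.0)   # slant path through twice the foliage
```

    At 60° zenith the path length through the foliage doubles, so the slant-view gap probability is the square of the nadir value.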

  12. Two-dimensional hydrodynamics of uniform ion plasma in electrostatic field

    International Nuclear Information System (INIS)

    Mahdieh, M. H.; Gavili, A.

    2005-01-01

    Two-dimensional hydrodynamics of ion extraction from uniform quasi-neutral plasma in an electrostatic field has been simulated numerically. Experimentally, tunable pulsed lasers produce non-uniform plasma through stepwise photo-excitation and photo-ionization or multi-photo-ionization processes. Poisson's equation was solved simultaneously with the mass and momentum equations, assuming the Maxwell-Boltzmann distribution for electrons. In the calculation, the initial density profile at the boundaries has been assumed to be very steep for the ion plasma. In these calculations the dynamics of the electric potential and the ion density were assessed. The ion extraction time was also estimated from the calculation. Knowledge of the spatial distribution of the ions across the cathode is very important for practical purposes. In this simulation, the spatial and temporal distributions of the ion current density across the cathode were calculated.

  13. Two-Dimensional Materials for Sensing: Graphene and Beyond

    Directory of Open Access Journals (Sweden)

    Seba Sara Varghese

    2015-09-01

    Full Text Available Two-dimensional materials have attracted great scientific attention due to their unusual and fascinating properties for use in electronics, spintronics, photovoltaics, medicine, composites, etc. Graphene, transition metal dichalcogenides such as MoS2, phosphorene, etc., which belong to the family of two-dimensional materials, have shown great promise for gas sensing applications due to their high surface-to-volume ratio, low noise and sensitivity of electronic properties to the changes in the surroundings. Two-dimensional nanostructured semiconducting metal oxide based gas sensors have also been recognized as successful gas detection devices. This review aims to provide the latest advancements in the field of gas sensors based on various two-dimensional materials with the main focus on sensor performance metrics such as sensitivity, specificity, detection limit, response time, and reversibility. Both experimental and theoretical studies on the gas sensing properties of graphene and other two-dimensional materials beyond graphene are also discussed. The article concludes with the current challenges and future prospects for two-dimensional materials in gas sensor applications.

  14. The Make 2D-DB II package: conversion of federated two-dimensional gel electrophoresis databases into a relational format and interconnection of distributed databases.

    Science.gov (United States)

    Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D

    2003-08-01

    The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.

  15. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    Science.gov (United States)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin.
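
    The univariate quantile mapping that MBCn generalizes can be sketched in a few lines; the gamma-distributed "precipitation" data and the +2 bias below are fabricated for the example:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_out):
    # Empirical quantile mapping, the univariate building block that MBCn
    # generalizes to N dimensions: find each value's position in the model's
    # historical CDF, then map it through the observed quantile function.
    q = np.searchsorted(np.sort(model_hist), model_out, side="right")
    q = np.clip(q / len(model_hist), 0.0, 1.0)
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 3.0, size=2000)           # "observed" precipitation
model = rng.gamma(2.0, 3.0, size=2000) + 2.0   # model output with a +2 bias
corrected = quantile_map(model, obs, model)
```

    Applied variable by variable, this removes marginal biases but leaves inter-variable dependence untouched, which is precisely the gap MBCn addresses.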

  16. Boundary effects in a quasi-two-dimensional driven granular fluid.

    Science.gov (United States)

    Smith, N D; Smith, M I

    2017-12-01

    The effect of a confining boundary on the spatial variations in granular temperature of a driven quasi-two-dimensional layer of particles is investigated experimentally. The radial drop in the relative granular temperature ΔT/T exhibits a maximum at intermediate particle numbers which coincides with a crossover from kinetic to collisional transport of energy. It is also found that at low particle numbers, the distributions of radial velocities are increasingly asymmetric as one approaches the boundary. The radial and tangential granular temperatures split, and in the tails of the radial velocity distribution there is a higher population of fast moving particles traveling away rather than towards the boundary.

  17. Data compression and genomes: a two-dimensional life domain map.

    Science.gov (United States)

    Menconi, Giulia; Benci, Vieri; Buiatti, Marcello

    2008-07-21

    We define the complexity of DNA sequences as the information content per nucleotide, calculated by means of some Lempel-Ziv data compression algorithm. It is possible to use the statistics of the complexity values of the functional regions of different complete genomes to distinguish among genomes of different domains of life (Archaea, Bacteria and Eukarya). We shall focus on the distribution function of the complexity of non-coding regions. We show that the three domains may be plotted in separate regions within the two-dimensional space where the axes are the skewness coefficient and the kurtosis coefficient of the aforementioned distribution. Preliminary results on 15 genomes are introduced.
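
    The complexity measure and the two map coordinates can be sketched with standard tools; zlib's DEFLATE is used here as a convenient Lempel-Ziv-family stand-in for whichever compressor the authors used, and the sequences are synthetic:

```python
import math
import random
import zlib

def complexity(seq):
    # information content per nucleotide, in bits, via a Lempel-Ziv-family
    # compressor (zlib/DEFLATE as a stand-in for the paper's algorithm)
    return 8.0 * len(zlib.compress(seq.encode(), 9)) / len(seq)

def skew_kurtosis(values):
    # the two coordinates of the "life domain map" for a set of
    # per-region complexity values
    n = len(values)
    m = sum(values) / n
    s = math.sqrt(sum((v - m) ** 2 for v in values) / n)
    skew = sum((v - m) ** 3 for v in values) / (n * s ** 3)
    kurt = sum((v - m) ** 4 for v in values) / (n * s ** 4)
    return skew, kurt

random.seed(0)
random_seq = "".join(random.choice("ACGT") for _ in range(10000))
repeat_seq = "ACGT" * 2500
c_random = complexity(random_seq)   # near the 2 bits/nt entropy ceiling
c_repeat = complexity(repeat_seq)   # highly compressible, far below it
```

    A random four-letter sequence sits near the 2 bits per nucleotide ceiling, while a periodic one compresses almost completely; real non-coding regions fall in between, and it is the shape of their complexity distribution that separates the domains.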

  18. Instantaneous three-dimensional visualization of concentration distributions in turbulent flows with crossed-plane laser-induced fluorescence imaging

    Science.gov (United States)

    Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.

    2005-01-01

    A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.

  19. Two-dimensional calculus

    CERN Document Server

    Osserman, Robert

    2011-01-01

    The basic component of several-variable calculus, two-dimensional calculus is vital to mastery of the broader field. This extensive treatment of the subject offers the advantage of a thorough integration of linear algebra and materials, which aids readers in the development of geometric intuition. An introductory chapter presents background information on vectors in the plane, plane curves, and functions of two variables. Subsequent chapters address differentiation, transformations, and integration. Each chapter concludes with problem sets, and answers to selected exercises appear at the end of the book.

  20. Study on two-dimensional POISSON design of large-scale FFAG magnet

    International Nuclear Information System (INIS)

    Ouyang Huafu

    2006-01-01

    In order to decrease the edge effect of the field, the designed magnetic field distribution in a large-scale FFAG magnet is realized by both a trim coil and the shape of the magnet pole-face. Through two-dimensional POISSON simulations, the current and position of the trim coil and the shape of the magnet pole are determined. To facilitate the POISSON design, two codes were written to automatically adjust the current and position of the trim coil and the shape of the magnet pole-face in the POISSON input file. With these two codes, the efficiency of the POISSON simulations is improved and the mistakes that might occur in writing and adjusting the POISSON input file manually are avoided. (authors)

  1. Phase transitions in two-dimensional systems

    International Nuclear Information System (INIS)

    Salinas, S.R.A.

    1983-01-01

    Some experiments using synchrotron radiation beams to characterize solid-liquid (melting) and commensurate solid-incommensurate solid transitions in two-dimensional systems are described. Some ideas involved in modern theories of two-dimensional melting are briefly presented. The systems treated consist of noble gases (Kr, Ar, Xe) adsorbed on the basal plane of graphite and of thin films formed by some liquid crystals. (L.C.) [pt

  2. Simulation of the measure of the microparticle size distribution in two dimensions

    International Nuclear Information System (INIS)

    Lameiras, F.S.; Silva Neto, P.P. da

    1987-01-01

    For the nuclear ceramic industry, the determination of the pore size distribution is very important for predicting the dimensional thermal stability of uranium dioxide sintered pellets. The determination of the grain size distribution is also very important for predicting the operating behavior of these pellets, as well as for controlling the fabrication process. The Saltykov method is commonly used to determine microparticle size distributions. A two-dimensional simulation using this method and the size distribution of chords to calculate the area distribution [pt

  3. Tomograms for open quantum systems: In(finite) dimensional optical and spin systems

    Energy Technology Data Exchange (ETDEWEB)

    Thapliyal, Kishore, E-mail: tkishore36@yahoo.com [Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); Banerjee, Subhashish, E-mail: subhashish@iitj.ac.in [Indian Institute of Technology Jodhpur, Jodhpur 342011 (India); Pathak, Anirban, E-mail: anirban.pathak@gmail.com [Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India)

    2016-03-15

    Tomograms are obtained as probability distributions and are used to reconstruct a quantum state from experimentally measured values. We study the evolution of tomograms for different quantum systems, both finite and infinite dimensional. In realistic experimental conditions, quantum states are exposed to the ambient environment and hence subject to effects like decoherence and dissipation, which are dealt with here, consistently, using the formalism of open quantum systems. This is extremely relevant from the perspective of experimental implementation and issues related to state reconstruction in quantum computation and communication. These considerations are also expected to affect the quasiprobability distribution obtained from experimentally generated tomograms and nonclassicality observed from them. -- Highlights: •Tomograms are constructed for open quantum systems. •Finite and infinite dimensional quantum systems are studied. •Finite dimensional systems (phase states, single & two qubit spin states) are studied. •A dissipative harmonic oscillator is considered as an infinite dimensional system. •Both pure dephasing as well as dissipation effects are studied.

  4. Tomograms for open quantum systems: In(finite) dimensional optical and spin systems

    International Nuclear Information System (INIS)

    Thapliyal, Kishore; Banerjee, Subhashish; Pathak, Anirban

    2016-01-01

    Tomograms are obtained as probability distributions and are used to reconstruct a quantum state from experimentally measured values. We study the evolution of tomograms for different quantum systems, both finite and infinite dimensional. In realistic experimental conditions, quantum states are exposed to the ambient environment and hence subject to effects like decoherence and dissipation, which are dealt with here, consistently, using the formalism of open quantum systems. This is extremely relevant from the perspective of experimental implementation and issues related to state reconstruction in quantum computation and communication. These considerations are also expected to affect the quasiprobability distribution obtained from experimentally generated tomograms and nonclassicality observed from them. -- Highlights: •Tomograms are constructed for open quantum systems. •Finite and infinite dimensional quantum systems are studied. •Finite dimensional systems (phase states, single & two qubit spin states) are studied. •A dissipative harmonic oscillator is considered as an infinite dimensional system. •Both pure dephasing as well as dissipation effects are studied.

  5. Optical image encryption based on phase retrieval combined with three-dimensional particle-like distribution

    International Nuclear Information System (INIS)

    Chen, Wen; Chen, Xudong; Sheppard, Colin J R

    2012-01-01

    We propose a new phase retrieval algorithm for optical image encryption in three-dimensional (3D) space. The two-dimensional (2D) plaintext is considered as a series of particles distributed in 3D space, and an iterative phase retrieval algorithm is developed to encrypt the series of particles into phase-only masks. The feasibility and effectiveness of the proposed method are demonstrated by a numerical experiment, and the advantages and security of the proposed optical cryptosystems are also analyzed and discussed. (paper)

  6. The theory of critical phenomena in two-dimensional systems

    International Nuclear Information System (INIS)

    Olvera de la C, M.

    1981-01-01

    An exposition of the theory of critical phenomena in two-dimensional physical systems is presented. The first six chapters deal with the mean field theory of critical phenomena, scale invariance of the thermodynamic functions, Kadanoff's spin block construction, Wilson's renormalization group treatment of critical phenomena in configuration space, and the two-dimensional Ising model on a triangular lattice. The second part of this work consists of four chapters devoted to the application of the ideas expounded in the first part to the discussion of critical phenomena in superfluid films, two-dimensional crystals and the two-dimensional XY model of magnetic systems. Chapters seven to ten are devoted to the following subjects: analysis of long range order in one, two, and three-dimensional physical systems. Topological defects in the XY model, in superfluid films and in two-dimensional crystals. The Thouless-Kosterlitz iterated mean field theory of the dipole gas. The renormalization group treatment of the XY model, superfluid films and two-dimensional crystals. (author)

  7. The Class of (p,q)-spherical Distributions with an Extension of the Sector and Circle Number Functions

    Directory of Open Access Journals (Sweden)

    Wolf-Dieter Richter

    2017-07-01

    For evaluating the probabilities of arbitrary random events with respect to a given multivariate probability distribution, specific techniques are of great interest. An important two-dimensional high-risk limit law is the Gauss-exponential distribution, whose probabilities can be dealt with based on the Gauss–Laplace law. The latter is considered here as an element of the newly introduced family of (p,q)-spherical distributions. Based on a suitably defined non-Euclidean arc-length measure on (p,q)-circles, we prove geometric and stochastic representations of these distributions and of correspondingly distributed random vectors, respectively. These representations allow the new probability measures to be dealt with in much the same way as elliptically contoured distributions and more general homogeneous star-shaped ones. This is demonstrated by a generalization of the Box–Muller simulation method. In passing, we prove an extension of the sector and circle number functions.
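    The generalized Box–Muller method mentioned in the abstract reduces, for p = q = 2, to the classical Euclidean construction. A minimal sketch of that classical special case is given below; the (p,q)-circle generalization itself, which replaces cos/sin with (p,q)-circle analogues, is not reproduced here.

```python
import math
import random

def box_muller(n, seed=0):
    """Classical Box-Muller: pairs of i.i.d. standard normals from uniforms.

    This is only the Euclidean (p = q = 2) special case; the paper's
    generalization swaps the circle functions for (p, q)-circle analogues.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n // 2):
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(-2.0 * math.log(1.0 - u1))  # radial part
        theta = 2.0 * math.pi * u2                # angular part
        out.extend((r * math.cos(theta), r * math.sin(theta)))
    return out

samples = box_muller(100_000)
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
```

    With 100,000 samples, the empirical mean and variance should be close to 0 and 1, respectively.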

  8. Convex and Radially Concave Contoured Distributions

    Directory of Open Access Journals (Sweden)

    Wolf-Dieter Richter

    2015-01-01

    Integral representations of the locally defined star-generalized surface content measures on star spheres are derived for boundary spheres of balls that are convex or radially concave with respect to a fan in Rn. As a result, the general geometric measure representation of star-shaped probability distributions and the general stochastic representation of the corresponding random vectors allow additional specific interpretations in the two mentioned cases. Applications to estimating and testing hypotheses on scaling parameters are presented, and two-dimensional sample clouds are simulated.

  9. An analytical discrete-ordinates solution for an improved one-dimensional model of three-dimensional transport in ducts

    International Nuclear Information System (INIS)

    Garcia, R.D.M.

    2015-01-01

    Highlights: • An improved 1-D model of 3-D particle transport in ducts is studied. • The cases of isotropic and directional incidence are treated with the ADO method. • Accurate numerical results are reported for ducts of circular cross section. • A comparison with results of other authors is included. • The ADO method is found to be very efficient. - Abstract: An analytical discrete-ordinates solution is developed for the problem of particle transport in ducts, as described by a one-dimensional model constructed with two basis functions. Two types of particle incidence are considered: isotropic incidence and incidence described by the Dirac delta distribution. Accurate numerical results are tabulated for the reflection probabilities of semi-infinite ducts and the reflection and transmission probabilities of finite ducts. It is concluded that the developed solution is more efficient than commonly used numerical implementations of the discrete-ordinates method.

  10. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anurag; Kumar, Bimlesh, E-mail: anurag.sharma@iitg.ac.in, E-mail: bimk@iitg.ac.in [Department of Civil Engineering, Indian Institute of Technology Guwahati, 781039 (India)

    2017-02-15

    The present experimental study examines the probability distribution functions (PDFs) of turbulent flow characteristics near and away from the bed surface for both no-seepage and seepage flows. Laboratory experiments were conducted on a plane sand bed for no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimentally determined PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events are compared with theoretical expressions obtained from a Gram–Charlier (GC)-based exponential distribution. The experimental observations follow the computed PDF distributions for both the no-seepage and seepage cases. The Jensen–Shannon divergence (JSD) method is used to measure the similarity between the theoretical and experimental PDFs. The JSD for the PDFs of velocity fluctuations lies between 0.0005 and 0.003, while the JSD for the PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDFs of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of inward and outward interactions, which may be due to weaker events. The JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulent intensity is also developed in the present study; it agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description in the areas of turbulent flow either at a single or finite number of points
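    The Jensen–Shannon divergence used in this kind of study to score agreement between empirical and theoretical PDFs can be sketched as follows. The histogram here is synthetic Gaussian stand-in data, not the flume measurements; only the JSD computation itself is generic.

```python
import math
import random

def jensen_shannon_divergence(p, q):
    """JSD between two discrete distributions (base-2 logs, so 0 <= JSD <= 1)."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Synthetic stand-in data: histogram of simulated Gaussian "velocity
# fluctuations" on [-4, 4), compared with the exact Gaussian bin masses.
rng = random.Random(1)
edges = [-4.0 + 0.5 * i for i in range(17)]       # 16 bins of width 0.5
counts = [0] * 16
n = 200_000
for _ in range(n):
    x = rng.gauss(0.0, 1.0)
    if -4.0 <= x < 4.0:
        counts[int((x + 4.0) // 0.5)] += 1
total = sum(counts)
empirical = [c / total for c in counts]

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

theory = [std_normal_cdf(edges[i + 1]) - std_normal_cdf(edges[i])
          for i in range(16)]
norm = sum(theory)
theory = [t / norm for t in theory]   # renormalize to the truncated support

jsd = jensen_shannon_divergence(empirical, theory)
```

    Because the sample really is Gaussian, the JSD comes out very small, on the same order as the best-case values quoted in the abstract.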

  11. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid eSepehrband

    2016-05-01

    Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters obtained from a direct measurement technique (e.g., microscopy) or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has consistently been shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with its own distribution profile. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, lognormal, log-logistic and Birnbaum-Saunders distributions.
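    A comparison of candidate distribution families of the kind described can be sketched with simple fits and AIC. The sample below is synthetic lognormal data standing in for measured axon diameters, and only two of the candidate families (gamma by method of moments, lognormal by maximum likelihood) are shown.

```python
import math
import random

# Hypothetical stand-in for measured axon diameters: a right-skewed
# lognormal sample (the paper's data come from electron microscopy).
rng = random.Random(42)
data = [rng.lognormvariate(0.0, 0.5) for _ in range(5000)]
n = len(data)

# Gamma fit by the method of moments: shape k, scale theta.
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / n
k = mean ** 2 / var
theta = var / mean
ll_gamma = sum((k - 1.0) * math.log(x) - x / theta for x in data) \
           - n * (math.lgamma(k) + k * math.log(theta))

# Lognormal fit by maximum likelihood: fit a normal to log(x).
logs = [math.log(x) for x in data]
mu = sum(logs) / n
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
ll_lognorm = sum(-math.log(x * sigma) - 0.5 * math.log(2.0 * math.pi)
                 - (math.log(x) - mu) ** 2 / (2.0 * sigma ** 2)
                 for x in data)

# Both families have two parameters, so AIC ranks them by likelihood alone.
aic_gamma = 4.0 - 2.0 * ll_gamma
aic_lognorm = 4.0 - 2.0 * ll_lognorm
```

    Since the synthetic data are lognormal, the lognormal fit should win the AIC comparison; with real microscopy data, the same machinery would rank the paper's larger set of candidates.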

  12. A Bloch modal approach for engineering waveguide and cavity modes in two-dimensional photonic crystals

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2014-01-01

    uses no external excitation and determines the quasi-normal modes as unity eigenvalues of the cavity roundtrip matrix. We demonstrate the method and the quasi-normal modes for two types of two-dimensional photonic crystal structures, and discuss the quasi-normal mode field distributions and Q-factors...

  13. The probability of an encounter of two Brownian particles before escape

    International Nuclear Information System (INIS)

    Holcman, D; Kupka, I

    2009-01-01

    We study the probability of meeting of two Brownian particles before one of them exits a finite interval. We obtain an explicit expression for the probability as a function of the initial distance between the two particles using the Weierstrass elliptic function. We also find the law of the meeting location. Brownian simulations show the accuracy of our analysis. Finally, we discuss some applications to the probability that a double-strand DNA break is repaired in a confined environment.
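    A quick Monte Carlo check of the meeting-before-escape probability can be sketched as follows. This is an independent discrete-time approximation on the unit interval, not the paper's exact elliptic-function formula; the step size and trial count are illustrative choices.

```python
import random

def meet_before_escape(x1, x2, dt=2e-4, n_trials=1000, seed=7):
    """Monte Carlo estimate of the probability that two independent
    Brownian particles started at x1 < x2 inside (0, 1) meet before
    either one exits the interval."""
    rng = random.Random(seed)
    sd = dt ** 0.5
    hits = 0
    for _ in range(n_trials):
        a, b = x1, x2
        while True:
            a += rng.gauss(0.0, sd)
            b += rng.gauss(0.0, sd)
            if a >= b:                  # paths crossed: the particles met
                hits += 1
                break
            # While a < b, escape can only happen at a's left or b's right.
            if a <= 0.0 or b >= 1.0:
                break                   # one particle escaped first
    return hits / n_trials

p_close = meet_before_escape(0.45, 0.55)   # small initial separation
p_far = meet_before_escape(0.10, 0.90)     # large initial separation
```

    As expected, the meeting probability decreases sharply with the initial distance between the particles.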

  14. Two dimensional numerical model for steam--water flow in a sudden contraction

    International Nuclear Information System (INIS)

    Crowe, C.T.; Choi, H.N.

    1976-01-01

    A computational model developed for two-dimensional dispersed two-phase flows is applied to steam--water flow in a sudden contraction. The calculational scheme utilizes the cellular approach in which each cell is regarded as a control volume and the droplets are regarded as sources of mass, momentum and energy to the conveying (steam) phase. The predictions show how droplets channel in the entry region and affect the velocity and pressure distributions along the duct

  15. Visualization and quantification of three-dimensional distribution of yeast in bread dough.

    Science.gov (United States)

    Maeda, Tatsuro; DO, Gab-Soo; Sugiyama, Junichi; Araki, Tetsuya; Tsuta, Mizuki; Shiraga, Seizaburo; Ueda, Mitsuyoshi; Yamada, Masaharu; Takeya, Koji; Sagara, Yasuyuki

    2009-07-01

    A three-dimensional (3-D) bio-imaging technique was developed for visualizing and quantifying the 3-D distribution of yeast in frozen bread dough samples in accordance with the progress of the mixing process of the samples, applying cell-surface engineering to the surfaces of the yeast cells. The fluorescent yeast was recognized as bright spots at the wavelength of 520 nm. Frozen dough samples were sliced at intervals of 1 microm by a micro-slicer image processing system (MSIPS) equipped with a fluorescence microscope for acquiring cross-sectional images of the samples. A set of successive two-dimensional images was reconstructed to analyze the 3-D distribution of the yeast. The average shortest distance between centroids of enhanced green fluorescent protein (EGFP) yeasts was 10.7 microm at the pick-up stage, 9.7 microm at the clean-up stage, 9.0 microm at the final stage, and 10.2 microm at the over-mixing stage. The results indicated that the distribution of the yeast cells was the most uniform in the dough of white bread at the final stage, while the heterogeneous distribution at the over-mixing stage was possibly due to the destruction of the gluten network structure within the samples.
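    The uniformity measure quoted above (average shortest distance between centroids) can be sketched as a brute-force nearest-neighbour computation. The points below are random stand-ins for the segmented yeast centroids, with a hypothetical 100 um cube as the sample volume.

```python
import math
import random

def mean_nearest_neighbour_distance(points):
    """Average distance from each centroid to its nearest neighbour,
    computed by brute force in O(n^2); fine for a few thousand points."""
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

# Hypothetical stand-in for yeast centroids: 500 uniform random points in
# a 100 um cube (real centroids would come from the MSIPS image stack).
rng = random.Random(3)
pts = [(rng.uniform(0, 100), rng.uniform(0, 100), rng.uniform(0, 100))
       for _ in range(500)]
d_mean = mean_nearest_neighbour_distance(pts)
```

    For a uniform (Poisson-like) point pattern at this density, the expected nearest-neighbour distance is roughly 7 um; clustering would pull the measured average below this baseline.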

  16. Computation of two-dimensional isothermal flow in shell-and-tube heat exchangers

    International Nuclear Information System (INIS)

    Carlucci, L.N.; Galpin, P.F.; Brown, J.D.; Frisina, V.

    1983-07-01

    A computational procedure is outlined whereby two-dimensional isothermal shell-side flow distributions can be calculated for tube bundles having arbitrary boundaries and flow blocking devices, such as sealing strips, defined in arbitrary locations. The procedure is described in some detail and several computed results are presented to illustrate the robustness and generality of the method

  17. Conduction in rectangular quasi-one-dimensional and two-dimensional random resistor networks away from the percolation threshold.

    Science.gov (United States)

    Kiefer, Thomas; Villanueva, Guillermo; Brugger, Jürgen

    2009-08-01

    In this study we investigate electrical conduction in finite rectangular random resistor networks in quasi-one and two dimensions far away from the percolation threshold p(c) by the use of a bond percolation model. Various topologies such as parallel linear chains in one dimension, as well as square and triangular lattices in two dimensions, are compared as a function of the geometrical aspect ratio. In particular we propose a linear approximation for conduction in two-dimensional systems far from p(c), which is useful for engineering purposes. We find that the same scaling function, which can be used for finite-size scaling of percolation thresholds, also applies to describe conduction away from p(c). This is in contrast to the quasi-one-dimensional case, which is highly nonlinear. The qualitative analysis of the range within which the linear approximation is legitimate is given. A brief link to real applications is made by taking into account a statistical distribution of the resistors in the network. Our results are of potential interest in fields such as nanostructured or composite materials and sensing applications.
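    The conduction problem described above amounts to solving Kirchhoff's equations on a randomly diluted lattice. A minimal bond-percolation sketch is given below, with unit conductances, dense Gaussian elimination, and a hypothetical 6x6 square lattice rather than the paper's geometries and aspect ratios.

```python
import random

def effective_conductance(width, height, p, seed=0):
    """Effective conductance of a width x height square-lattice bond
    percolation network: each nearest-neighbour bond is present with
    probability p and has unit conductance; the left column is held at
    V = 1 and the right column at V = 0."""
    rng = random.Random(seed)
    idx = lambda x, y: y * width + x
    bonds = []
    for y in range(height):
        for x in range(width):
            if x + 1 < width and rng.random() < p:
                bonds.append((idx(x, y), idx(x + 1, y)))
            if y + 1 < height and rng.random() < p:
                bonds.append((idx(x, y), idx(x, y + 1)))
    # Unknown voltages: nodes not on the left (V = 1) or right (V = 0) edge.
    free = [i for i in range(width * height) if i % width not in (0, width - 1)]
    pos = {node: k for k, node in enumerate(free)}
    m = len(free)
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for i, j in bonds:                     # assemble Kirchhoff equations
        for a, c in ((i, j), (j, i)):
            if a in pos:
                A[pos[a]][pos[a]] += 1.0
                if c in pos:
                    A[pos[a]][pos[c]] -= 1.0
                elif c % width == 0:       # neighbour on the V = 1 edge
                    b[pos[a]] += 1.0
    for k in range(m):
        A[k][k] += 1e-9   # weak leak to ground keeps isolated clusters solvable
    for col in range(m):  # Gaussian elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            if f:
                for c in range(col, m):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
    V = [0.0] * m
    for r in range(m - 1, -1, -1):
        V[r] = (b[r] - sum(A[r][c] * V[c] for c in range(r + 1, m))) / A[r][r]
    volts = {node: V[pos[node]] for node in free}
    for i in range(width * height):
        if i % width == 0:
            volts[i] = 1.0
        elif i % width == width - 1:
            volts[i] = 0.0
    current = 0.0                          # net current leaving the V = 1 edge
    for i, j in bonds:
        if (i % width == 0) != (j % width == 0):
            hi, lo = (i, j) if i % width == 0 else (j, i)
            current += volts[hi] - volts[lo]
    return current

g_full = effective_conductance(6, 6, 1.0)    # complete lattice
g_dilute = effective_conductance(6, 6, 0.7)  # diluted, still above p_c = 1/2
```

    For the complete 6x6 lattice the vertical bonds carry no current by symmetry, so the exact conductance is 6 rows of 5 series resistors in parallel, i.e. 6/5 = 1.2; dilution can only lower this (Rayleigh monotonicity).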

  18. The transition probability and the probability for the left-most particle's position of the q-totally asymmetric zero range process

    Energy Technology Data Exchange (ETDEWEB)

    Korhonen, Marko [Department of Mathematics and Statistics, University of Helsinki, FIN-00014 (Finland); Lee, Eunghyun [Centre de Recherches Mathématiques (CRM), Université de Montréal, Quebec H3C 3J7 (Canada)

    2014-01-15

    We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required in order to use the Bethe ansatz, and the resulting model is the q-boson model of Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) of Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find an explicit formula for the transition probability of the q-TAZRP via the Bethe ansatz. Using the transition probability, we find the probability distribution of the left-most particle's position at time t. To find this probability we establish a new identity corresponding to the identity for the asymmetric simple exclusion process of Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.

  19. Disorder-induced modification of the transmission of light through two-dimensional photonic crystals

    International Nuclear Information System (INIS)

    Beggs, D M; Kaliteevski, M A; Abram, R A; Cassagne, D; Albert, J P

    2005-01-01

    Disordered two-dimensional photonic crystals with a complete photonic band-gap have been investigated. Transmission and reflection spectra have been modelled for both ballistic and scattered light. The density of states and electromagnetic field profiles of disorder-induced localized states have also been calculated, for various levels of disorder. It is found that there is a threshold-like behaviour in the amount of disorder. Below the threshold, it is seen that there is a vanishing probability of disorder-induced localized states being introduced into the centre of the photonic band-gap, but that edge-states narrow the band-gap. Above the threshold, there is a non-zero probability of disorder-induced localized states throughout the photonic band-gap, and the modification of the transmission and reflection spectra due to disorder rapidly increases with increasing disorder

  20. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    (No Danish abstract available.) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying...

  1. Two- and three-dimensional CT analysis of ankle fractures

    International Nuclear Information System (INIS)

    Magid, D.; Fishman, E.K.; Ney, D.R.; Kuhlman, J.E.

    1988-01-01

    CT with coronal and sagittal reformatting (two-dimensional CT) and animated volumetric image rendering (three-dimensional CT) was used to assess ankle fractures. Partial volume limits transaxial CT in assessments of horizontally oriented structures. Two-dimensional CT, being orthogonal to the plafond, superior mortise, talar dome, and tibial epiphysis, often provides the most clinically useful images. Two-dimensional CT is most useful in characterizing potentially confusing fractures, such as Tillaux (anterior tubercle), triplane, osteochondral talar dome, or nondisplaced talar neck fractures, and it is the best study to confirm intraarticular fragments. Two- and three-dimensional CT best indicate the percentage of articular surface involvement and best demonstrate postoperative results or complications (hardware migration, residual step-off, delayed union, DJD, AVN, etc). Animated three-dimensional images are the preferred means of integrating the two-dimensional findings for surgical planning, as these images more closely simulate the clinical problem

  2. On two-dimensionalization of three-dimensional turbulence in shell models

    DEFF Research Database (Denmark)

    Chakraborty, Sagar; Jensen, Mogens Høgh; Sarkar, A.

    2010-01-01

    Applying a modified version of the Gledzer-Ohkitani-Yamada (GOY) shell model, the signatures of so-called two-dimensionalization effect of three-dimensional incompressible, homogeneous, isotropic fully developed unforced turbulence have been studied and reproduced. Within the framework of shell m......-similar PDFs for longitudinal velocity differences are also presented for the rotating 3D turbulence case....

  3. Universal Probability Distribution Function for Bursty Transport in Plasma Turbulence

    International Nuclear Information System (INIS)

    Sandberg, I.; Benkadda, S.; Garbet, X.; Ropokis, G.; Hizanidis, K.; Castillo-Negrete, D. del

    2009-01-01

    Bursty transport phenomena associated with convective motion present universal statistical characteristics among different physical systems. In this Letter, a stochastic univariate model and the associated probability distribution function for the description of bursty transport in plasma turbulence is presented. The proposed stochastic process recovers the universal distribution of density fluctuations observed in the plasma edge of several magnetic confinement devices and the remarkable scaling between their skewness S and kurtosis K. Similar statistical characteristics of variability have also been observed in other physical systems characterized by convection, such as the x-ray fluctuations emitted by the Cygnus X-1 accretion disc plasma and the sea surface temperature fluctuations.
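    The skewness-kurtosis scaling mentioned above can be illustrated by sampling gamma distributions, a common stand-in for positive-definite bursty amplitudes; for the gamma family the relation K = 3 + 1.5 S^2 is exact, since K - 3 = 6/k and S^2 = 4/k. This is only an illustration of such a quadratic S-K relation, not the paper's stochastic model.

```python
import random

def skewness_kurtosis(xs):
    """Sample skewness S and (non-excess) kurtosis K."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

# Hypothetical bursty amplitudes: gamma samples over a range of shapes,
# tracing out the quadratic S-K curve K = 3 + 1.5 * S**2.
rng = random.Random(0)
pairs = []
for shape in (1.0, 2.0, 4.0, 8.0):
    xs = [rng.gammavariate(shape, 1.0) for _ in range(300_000)]
    pairs.append(skewness_kurtosis(xs))
```

    As the shape parameter grows, the samples become less bursty: both S and K decrease along the same quadratic curve.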

  4. Two-dimensional turbulent convection

    Science.gov (United States)

    Mazzino, Andrea

    2017-11-01

    We present an overview of the most relevant, and sometimes contrasting, theoretical approaches to Rayleigh-Taylor and mean-gradient-forced Rayleigh-Bénard two-dimensional turbulence together with numerical and experimental evidences for their support. The main aim of this overview is to emphasize that, despite the different character of these two systems, especially in relation to their steadiness/unsteadiness, turbulent fluctuations are well described by the same scaling relationships originated from the Bolgiano balance. The latter states that inertial terms and buoyancy terms balance at small scales giving rise to an inverse kinetic energy cascade. The main difference with respect to the inverse energy cascade in hydrodynamic turbulence [R. H. Kraichnan, "Inertial ranges in two-dimensional turbulence," Phys. Fluids 10, 1417 (1967)] is that the rate of cascade of kinetic energy here is not constant along the inertial range of scales. Thanks to the absence of physical boundaries, the two systems here investigated turned out to be a natural physical realization of the Kraichnan scaling regime hitherto associated with the elusive "ultimate state of thermal convection" [R. H. Kraichnan, "Turbulent thermal convection at arbitrary Prandtl number," Phys. Fluids 5, 1374-1389 (1962)].

  5. Entropy of Bit-Stuffing-Induced Measures for Two-Dimensional Checkerboard Constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Vaarby, Torben Strange

    2007-01-01

    A modified bit-stuffing scheme for two-dimensional (2-D) checkerboard constraints is introduced. The entropy of the scheme is determined based on a probability measure defined by the modified bit-stuffing. Entropy results of the scheme are given for 2-D constraints on a binary alphabet....... The constraints considered are 2-D RLL (d, infinity) for d = 2, 3 and 4 as well as for the constraint with a minimum 1-norm distance of 3 between 1s. For these results the entropy is within 1-2% of an upper bound on the capacity for the constraint. As a variation of the scheme, periodic merging arrays are also...
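    Plain bit-stuffing for a 2-D RLL(d, infinity) constraint can be sketched as below. This is the unmodified baseline scheme with an unbiased source, not the modified stuffing or the optimized source distribution the paper uses to approach capacity.

```python
import random

def bit_stuff_2d_rll(bits, width, height, d):
    """Write a binary source into a width x height array under the 2-D
    RLL(d, infinity) constraint: any two 1s in a row or column are
    separated by at least d 0s.  Cells forced to 0 by previously written
    1s are "stuffed" without consuming source bits."""
    grid = [[0] * width for _ in range(height)]
    src = iter(bits)
    consumed = 0
    for y in range(height):
        for x in range(width):
            forced = (any(grid[y][x - k] for k in range(1, min(d, x) + 1)) or
                      any(grid[y - k][x] for k in range(1, min(d, y) + 1)))
            if forced:
                grid[y][x] = 0            # stuffed bit
            else:
                grid[y][x] = next(src)    # free cell: store a source bit
                consumed += 1
    return grid, consumed

rng = random.Random(5)
source = [1 if rng.random() < 0.5 else 0 for _ in range(10_000)]
grid, used = bit_stuff_2d_rll(source, 50, 50, 2)
rate = used / (50 * 50)   # source bits stored per constrained cell
```

    The scheme is invertible: a decoder scanning in the same raster order can identify the forced cells from the already-decoded bits and skip them, recovering the source exactly.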

  6. Effect of Rotation for Two-Temperature Generalized Thermoelasticity of Two-Dimensional under Thermal Shock Problem

    Directory of Open Access Journals (Sweden)

    Kh. Lotfy

    2013-01-01

    Full Text Available The theory of two-temperature generalized thermoelasticity based on the theory of Youssef is used to solve boundary value problems of two-dimensional half-space. The governing equations are solved using normal mode method under the purview of the Lord-Şhulman (LS and the classical dynamical coupled theory (CD. The general solution obtained is applied to a specific problem of a half-space subjected to one type of heating, the thermal shock type. We study the influence of rotation on the total deformation of thermoelastic half-space and the interaction with each other under the influence of two temperature theory. The material is homogeneous isotropic elastic half-space. The methodology applied here is use of the normal mode analysis techniques that are used to solve the resulting nondimensional coupled field equations for the two theories. Numerical results for the displacement components, force stresses, and temperature distribution are presented graphically and discussed. The conductive temperature, the dynamical temperature, the stress, and the strain distributions are shown graphically with some comparisons.

  7. Two dimensional kicked quantum Ising model: dynamical phase transitions

    International Nuclear Information System (INIS)

    Pineda, C; Prosen, T; Villaseñor, E

    2014-01-01

    Using an efficient one and two qubit gate simulator operating on graphical processing units, we investigate ergodic properties of a quantum Ising spin 1/2 model on a two-dimensional lattice, which is periodically driven by a δ-pulsed transverse magnetic field. We consider three different dynamical properties: (i) level density, (ii) level spacing distribution of the Floquet quasienergy spectrum, and (iii) time-averaged autocorrelation function of magnetization components. Varying the parameters of the model, we found transitions between ordered (non-ergodic) and quantum chaotic (ergodic) phases, but the transitions between flat and non-flat spectral density do not correspond to transitions between ergodic and non-ergodic local observables. Even more surprisingly, we found good agreement of level spacing distribution with the Wigner surmise of random matrix theory for almost all values of parameters except where the model is essentially non-interacting, even in regions where local observables are not ergodic or where spectral density is non-flat. These findings question the versatility of the interpretation of level spacing distribution in many-body systems and stress the importance of the concept of locality. (paper)
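    The Wigner-surmise comparison invoked above can be checked quickly by sampling 2x2 GOE matrices, for which the surmise P(s) = (pi s / 2) exp(-pi s^2 / 4) is exact. This is only the random-matrix baseline, not the kicked-Ising Floquet spectrum itself.

```python
import math
import random

# Eigenvalue spacing of a 2x2 real symmetric matrix [[a, b], [b, c]] is
# sqrt((a - c)**2 + 4 * b**2), so no eigensolver is needed.
rng = random.Random(11)
spacings = []
for _ in range(100_000):
    a = rng.gauss(0.0, 1.0)
    c = rng.gauss(0.0, 1.0)
    b = rng.gauss(0.0, math.sqrt(0.5))   # GOE: off-diagonal variance 1/2
    spacings.append(math.sqrt((a - c) ** 2 + 4.0 * b ** 2))
mean_s = sum(spacings) / len(spacings)
s_norm = [s / mean_s for s in spacings]  # "unfolded": mean spacing 1

# Fraction of spacings below 0.5, versus the surmise CDF 1 - exp(-pi s^2 / 4).
frac = sum(1 for s in s_norm if s < 0.5) / len(s_norm)
cdf_wigner = 1.0 - math.exp(-math.pi * 0.5 ** 2 / 4.0)
```

    The hallmark of the Wigner surmise is level repulsion, P(s) -> 0 as s -> 0, in contrast to the Poisson statistics of non-ergodic spectra.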

  8. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers also occurred for the other two types of geocoding errors but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
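    The median-error-length summary quoted above is straightforward to compute from the error vectors. The data below are simulated as a hypothetical two-component mixture (a tight core plus occasional gross errors, echoing the paper's motivation for heavy-tailed mixtures), not the Carroll County geocodes.

```python
import math
import random

def median_error_length(true_pts, geocoded_pts):
    """Median length of the positional-error vectors (geocode - truth)."""
    lengths = sorted(math.dist(p, q) for p, q in zip(true_pts, geocoded_pts))
    n = len(lengths)
    mid = n // 2
    return lengths[mid] if n % 2 else 0.5 * (lengths[mid - 1] + lengths[mid])

# Hypothetical geocoding errors in metres: 90% of addresses get a tight
# Gaussian error (scale 50 m), 10% get a gross error (scale 500 m).
rng = random.Random(9)
truth, geocoded = [], []
for _ in range(2001):
    x, y = rng.uniform(0, 1e4), rng.uniform(0, 1e4)
    truth.append((x, y))
    scale = 50.0 if rng.random() < 0.9 else 500.0
    geocoded.append((x + rng.gauss(0, scale), y + rng.gauss(0, scale)))
med = median_error_length(truth, geocoded)
```

    The median is robust to the gross-error component, which is exactly why it is preferred to the mean for summarizing heavy-tailed positional errors.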

  9. Non-Gaussian probability distributions of solar wind fluctuations

    Directory of Open Access Journals (Sweden)

    E. Marsch

    Full Text Available The probability distributions of field differences ∆x(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lags τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scales τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
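    The increment analysis above can be sketched as follows. The synthetic random-walk series and the lag values are assumptions of this sketch, not Helios data, so no intermittent (heavy-tailed) behaviour is expected here; the point is only how ∆x(τ) distributions are formed and compared across lags.

```python
import random

# Form field differences Delta_x(tau) = x(t + tau) - x(t) and compare
# the shape of their distribution at different lags via the excess
# kurtosis (0 for a Gaussian; spikier-than-Gaussian gives > 0).
random.seed(1)
x = [0.0]
for _ in range(5000):
    x.append(x[-1] + random.gauss(0.0, 1.0))  # synthetic stand-in series

def increments(series, lag):
    return [series[i + lag] - series[i] for i in range(len(series) - lag)]

def excess_kurtosis(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    return m4 / var ** 2 - 3.0

short_lag_k = excess_kurtosis(increments(x, 2))
long_lag_k = excess_kurtosis(increments(x, 500))
```

    In the Helios analysis, the short-lag distributions show strongly positive excess kurtosis while the long-lag ones are near Gaussian.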

  10. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. The current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) estimate and the conditional mean (CM) estimate. The benefits of either choice are not well understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme for connecting the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation makes it possible to study many properties of the infinite-dimensional MAP estimate that were previously inaccessible. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical priors. (paper)

  11. Autocorrelation based reconstruction of two-dimensional binary objects

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.; Castaneda, R.

    2005-10-01

    A method for reconstructing two-dimensional binary objects from their autocorrelation functions is discussed. The objects consist of a finite set of identical elements. The reconstruction algorithm is based on the concept of a class of element pairs, defined as the set of element pairs with the same separation vector. This concept makes it possible to resolve the redundancy introduced by the element pairs of each class. It is also shown that different objects consisting of an equal number of elements and the same classes of pairs produce Fraunhofer diffraction patterns with identical intensity distributions. However, the method predicts all the possible objects that produce the same Fraunhofer pattern. (author)
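    The "class of element pairs" concept can be illustrated with a small sketch; the element positions below are a made-up example object on an integer grid, not data from the paper.

```python
from collections import defaultdict

# Group every pair of element positions by its separation vector: each
# key of `classes` is one class of element pairs in the sense above.
elements = [(0, 0), (1, 0), (2, 0), (0, 1)]  # hypothetical binary object

classes = defaultdict(list)
for i, a in enumerate(elements):
    for b in elements[i + 1:]:
        sep = (b[0] - a[0], b[1] - a[1])
        classes[sep].append((a, b))

# The two horizontally adjacent pairs share the separation vector (1, 0):
# exactly the redundancy the reconstruction algorithm must resolve.
```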

  12. Advances in delimiting the Hilbert-Schmidt separability probability of real two-qubit systems

    International Nuclear Information System (INIS)

    Slater, Paul B

    2010-01-01

    We seek to derive the probability, expressed in terms of the Hilbert-Schmidt (Euclidean or flat) metric, that a generic (nine-dimensional) real two-qubit system is separable, by implementing the well-known Peres-Horodecki test on the partial transposes (PTs) of the associated 4 × 4 density matrices (ρ). But the full implementation of the test, requiring that the determinant of the PT be nonnegative for separability to hold, appears to be, at least presently, computationally intractable. So, we have previously implemented, using the auxiliary concept of a diagonal-entry-parameterized separability function (DESF), the weaker implied test of nonnegativity of the six 2 × 2 principal minors of the PT. This yielded an exact upper bound on the separability probability of 1024/(135π²) ≈ 0.76854. Here, we piece together (reflection-symmetric) results obtained by requiring that each of the four 3 × 3 principal minors of the PT, in turn, be nonnegative, giving an improved/reduced upper bound of 22/35 ≈ 0.628571. Then, we conclude that a still further improved upper bound of 1129/2100 ≈ 0.537619 can be found by similarly piecing together the (reflection-symmetric) results of enforcing the simultaneous nonnegativity of certain pairs of the four 3 × 3 principal minors. Numerical simulations, as opposed to exact symbolic calculations, indicate, on the other hand, that the true probability is certainly less than 1/2. Our analyses lead us to suggest a possible form for the true DESF, yielding a separability probability of 29/64 ≈ 0.453125, while the absolute separability probability of (6928 − 2205π)/2^(9/2) ≈ 0.0348338 provides the best exact lower bound established so far. In deriving our improved upper bounds, we rely repeatedly upon the use of certain integrals over cubes that arise. Finally, we apply an independence assumption to a pair of DESFs that comes close to reproducing our numerical estimate of the true separability function.

  13. Multi-perspective views of students’ difficulties with one-dimensional vector and two-dimensional vector

    Science.gov (United States)

    Fauzi, Ahmad; Ratna Kawuri, Kunthi; Pratiwi, Retno

    2017-01-01

    Researchers of students' conceptual change usually collect data from written tests and interviews. Moreover, reports of conceptual change often simply refer to changes in concepts, such as on a test, without identifying the learning processes that have taken place. Research has shown that students have difficulties with vectors in university introductory physics courses and high school physics courses. In this study, we intended to explore students' understanding of one-dimensional and two-dimensional vectors from multiple perspectives, namely through a test and through interviews. Our research study adopted a mixed-methodology design. The participants were sixty third-semester students of the physics education department. The data were collected by tests and interviews. We divided the students' understanding of one-dimensional and two-dimensional vectors into two categories: vector skills in the addition of one-dimensional and two-dimensional vectors, and the relation between vector skills and conceptual understanding. From the investigation, only 44% of students provided correct answers for vector skills in the addition of one-dimensional and two-dimensional vectors, and only 27% of students provided correct answers for the relation between vector skills and conceptual understanding.

  14. Correlation between DNAPL distribution area and dissolved concentration in surfactant enhanced aquifer remediation effluent: a two-dimensional flow cell study

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Bin; Li, Huiying; Du, Xiaoming; Zhong, Lirong; Yang, Bin; Du, Ping; Gu, Qingbao; Li, Fasheng

    2016-02-01

    During the process of surfactant enhanced aquifer remediation (SEAR), free-phase dense non-aqueous phase liquid (DNAPL) may be mobilized and spread. The impact of DNAPL spreading on SEAR remediation is not well understood, and its positive effects are rarely mentioned. To evaluate the correlation between DNAPL spreading and remediation efficiency, a two-dimensional sandbox apparatus was used to simulate the migration and dissolution of 1,2-DCA (1,2-dichloroethane) DNAPL during SEAR. The distribution area of DNAPL in the sandbox was determined by digital image analysis and correlated with the effluent DNAPL concentration. The results showed that the effluent DNAPL concentration has a significant positive linear correlation with the DNAPL distribution area, indicating that the mobilization of DNAPL could improve remediation efficiency by enlarging the total NAPL-water interfacial area available for mass transfer. Meanwhile, the vertical migration of 1,2-DCA was confined within the aquifer boundary in all experiments, implying that by manipulating injection parameters in SEAR, optimal remediation efficiency can be reached while the risk of vertical DNAPL migration is minimized. This study provides a convenient, visual, and quantitative method for optimizing the parameters of a SEAR project, and an approach for rapidly predicting the extent of DNAPL contamination from the dissolved DNAPL concentration in the extraction well.
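    The reported positive linear correlation can be checked with an ordinary Pearson coefficient, sketched below on invented numbers (the study's values came from image analysis of the sandbox); variable names and units are placeholders.

```python
import math

# Invented illustrative data: DNAPL distribution area vs. effluent
# DNAPL concentration (placeholder units).
area = [10.0, 20.0, 30.0, 40.0, 50.0]
conc = [1.1, 2.0, 2.9, 4.2, 5.1]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(area, conc)  # close to 1 for a strong positive linear trend
```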

  15. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    Science.gov (United States)

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
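    A minimal simulation of the count model C = Np makes the point concrete: raw counts systematically underestimate N unless the detection probability is accounted for. N, p, and the number of replicate surveys are invented values for this sketch.

```python
import random

random.seed(42)
N, p = 200, 0.4  # hypothetical true abundance and detection probability

def survey_count(n_animals, detect_p):
    # each animal is detected independently with probability detect_p
    return sum(1 for _ in range(n_animals) if random.random() < detect_p)

counts = [survey_count(N, p) for _ in range(1000)]
mean_count = sum(counts) / len(counts)  # close to N * p = 80, not to N
adjusted = mean_count / p               # recovers N on average, if p is known
```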

  16. A 3D Polymer Based Printed Two-Dimensional Laser Scanner

    International Nuclear Information System (INIS)

    Oyman, H A; Yalcinkaya, A D; Gokdel, Y D; Ferhanoglu, O

    2016-01-01

    A two-dimensional (2D) polymer-based scanning mirror with magnetic actuation is developed for imaging applications. The proposed device consists of a circular suspension holding a rectangular mirror and can generate a 2D scan pattern. The three-dimensional (3D) printing technology used to fabricate the device offers added flexibility in controlling the cross-sectional profile as well as the stress distribution compared to traditional planar process technologies. The mirror is designed with a portable, miniaturized confocal microscope application in mind, delivering 4.5 and 4.8 degrees of optical scan angle at 111 and 267 Hz, respectively. Based on this mechanical performance, a microscope incorporating the mirror is estimated to achieve a field of view (FOV) of 350 µm × 350 µm. (paper)

  17. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence. Discrete Distributions: Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation. Continuous Distributions: The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

  18. Two-dimensional liquid chromatography

    DEFF Research Database (Denmark)

    Græsbøll, Rune

-dimensional separation space. Optimization of gradients in online RP×RP is more difficult than in normal HPLC as a result of the increased number of parameters and their influence on each other. Modeling the coverage of the compounds across the two-dimensional chromatogram as a result of a change in gradients could...... be used for optimization purposes, and reduce the time spent on optimization. In this thesis (chapter 6), and manuscript B, a measure of the coverage of the compounds in the two-dimensional separation space is defined. It is then shown that this measure can be modeled for changes in the gradient in both...

  19. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detection probability and precision of 2D and 3D atom localization can be significantly improved by adjusting the system parameters: the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  20. Estimating species occurrence, abundance, and detection probability using zero-inflated distributions.

    Science.gov (United States)

    Wenger, Seth J; Freeman, Mary C

    2008-10-01

    Researchers have developed methods to account for imperfect detection of species with either occupancy (presence-absence) or count data using replicated sampling. We show how these approaches can be combined to simultaneously estimate occurrence, abundance, and detection probability by specifying a zero-inflated distribution for abundance. This approach may be particularly appropriate when patterns of occurrence and abundance arise from distinct processes operating at differing spatial or temporal scales. We apply the model to two data sets: (1) previously published data for a species of duck, Anas platyrhynchos, and (2) data for a stream fish species, Etheostoma scotti. We show that in these cases, an incomplete-detection zero-inflated modeling approach yields a superior fit to the data relative to other models. We propose that zero-inflated abundance models accounting for incomplete detection be considered when replicate count data are available.
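    The combined model can be sketched as a simulation: a site is occupied with probability psi; if occupied, its abundance is Poisson(lam); a single survey then detects each individual with probability p. All parameter values here are invented for illustration and are not from either data set.

```python
import math
import random

random.seed(7)
psi, lam, p = 0.6, 5.0, 0.5  # hypothetical occupancy, abundance, detection

def poisson_draw(mean):
    # Knuth's multiplication method, adequate for small means
    limit, k, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

def simulate_site():
    n = poisson_draw(lam) if random.random() < psi else 0    # true abundance
    count = sum(1 for _ in range(n) if random.random() < p)  # observed count
    return n, count

sites = [simulate_site() for _ in range(2000)]
occupied_fraction = sum(1 for n, _ in sites if n > 0) / len(sites)
mean_count = sum(c for _, c in sites) / len(sites)  # approx. psi * lam * p
```

    Fitting the model goes in the opposite direction: psi, lam, and p are estimated jointly from replicated counts like these.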

  1. Analysis of two-dimensional elemental maps in adult and middle-aged female and male Wistar rats by X-ray microfluorescence with synchrotron radiation

    International Nuclear Information System (INIS)

    Barbosa, R.F.; Anjos, M.J.; Jesus, E.F.O. de; Lopes, R.T.; Oliveira, L.F. de; Carmo, M.G.T. do; Rocha, M.S.; Martinez, A.M.B.

    2008-01-01

    Full text: There are few methods available to measure the spatial (two- or three-dimensional) elemental distribution in animal brain. X-Ray Microfluorescence with Synchrotron Radiation is a multielemental mapping technique, which was used in this work to determine the two-dimensional maps of phosphorus (P), chlorine (Cl), potassium (K), iron (Fe), copper (Cu) and zinc (Zn) in coronal sections of adult (60 days old) and middle-aged (20 months old) female (n = 4) and male (n = 4) Wistar rats. The measurements were carried out at the XRF beamline of the Synchrotron Light National Laboratory (Campinas, Brazil). A two-dimensional scan was performed in order to study the tendency of elemental concentration variation and the elemental distribution. The acquisition time for each pixel was 10 s/step and the step size was 300 μm/step in both directions. It was observed that P levels decreased with advancing age in female rats but increased with advancing age in male rats. K, Fe and Cu levels increased in female and male middle-aged rats, in the same way as P and Cl levels (only in male animals). In addition, Fe levels were higher in female rats than in male ones. However, the P and K distributions were homogeneous over the entire brain section, independently of gender and age. The Cl distribution was more pronounced in cortical areas, hippocampus and thalamus for all the animals studied, except for the middle-aged female rats. The Fe distribution was most conspicuous in the thalamus, hypothalamus and cortical area. Moreover, the Zn distributions are in good agreement with results reported in the literature, being most intense in the hippocampus. Our results showed that an increase of Fe, Cu and Zn with aging may be related to the development of some neurodegenerative disorders, since the literature reports an increase of these elements in Parkinson's disease, Alzheimer's disease and Wilson's disease.
Therefore, we can see that

  2. Ultrafast carrier thermalization in lead iodide perovskite probed with two-dimensional electronic spectroscopy.

    Science.gov (United States)

    Richter, Johannes M; Branchi, Federico; Valduga de Almeida Camargo, Franco; Zhao, Baodan; Friend, Richard H; Cerullo, Giulio; Deschler, Felix

    2017-08-29

    In band-like semiconductors, charge carriers form a thermal energy distribution rapidly after optical excitation. In hybrid perovskites, the cooling of such thermal carrier distributions occurs on timescales of about 300 fs via carrier-phonon scattering. However, the initial build-up of the thermal distribution proved difficult to resolve with pump-probe techniques due to the requirement of high resolution, both in time and pump energy. Here, we use two-dimensional electronic spectroscopy with sub-10 fs resolution to directly observe the carrier interactions that lead to a thermal carrier distribution. We find that thermalization occurs dominantly via carrier-carrier scattering under the investigated fluences and report the dependence of carrier scattering rates on excess energy and carrier density. We extract characteristic carrier thermalization times from below 10 to 85 fs. These values allow for mobilities of 500 cm² V⁻¹ s⁻¹ at carrier densities lower than 2 × 10¹⁹ cm⁻³ and limit the time for carrier extraction in hot carrier solar cells. Carrier-carrier scattering rates determine the fundamental limits of carrier transport and electronic coherence. Using two-dimensional electronic spectroscopy with sub-10 fs resolution, Richter and Branchi et al. extract carrier thermalization times of 10 to 85 fs in hybrid perovskites.

  3. Extreme points of the convex set of joint probability distributions with ...

    Indian Academy of Sciences (India)

    Here we address the following problem: If G is a standard ... convex set of all joint probability distributions on the product Borel space (X₁ × X₂, F₁ ⊗ F₂) which .... cannot be identically zero when X and Y vary in A₁ and u and v vary in H₂. Thus.

  4. Chaotic dynamics in two-dimensional noninvertible maps

    CERN Document Server

    Mira, Christian; Cathala, Jean-Claude; Gardini, Laura

    1996-01-01

    This book is essentially devoted to the complex properties (phase plane structure and bifurcations) of two-dimensional noninvertible maps, i.e. maps having either a non-unique inverse or no real inverse, depending on the plane point. They constitute models of sets of discrete dynamical systems encountered in Engineering (Control, Signal Processing, Electronics), Physics, Economics, and the Life Sciences. Compared to the studies made in the one-dimensional case, the two-dimensional situation remained for a long time in an underdeveloped state. It is only in recent years that interest in this resea

  5. Application of a method for comparing one-dimensional and two-dimensional models of a ground-water flow system

    International Nuclear Information System (INIS)

    Naymik, T.G.

    1978-01-01

    To evaluate the inability of a one-dimensional ground-water model to interact continuously with surrounding hydraulic head gradients, simulations using one-dimensional and two-dimensional ground-water flow models were compared. This approach used two types of models: flow-conserving one- and two-dimensional models, and one-dimensional and two-dimensional models designed to yield two-dimensional solutions. The hydraulic conductivities of controlling features were varied, and model comparison was based on the travel times of marker particles. The solutions within each of the two model types compare reasonably well, but a three-dimensional solution is required to quantify the comparison.

  6. Reflectance distribution in optimal transmittance cavities: The remains of a higher dimensional space

    International Nuclear Information System (INIS)

    Naumis, Gerardo G.; Bazan, A.; Torres, M.; Aragon, J.L.; Quintero-Torres, R.

    2008-01-01

    One of the few examples in which the physical properties of an incommensurable system reflect an underlying higher dimensionality is presented. Specifically, we show that the reflectivity distribution of an incommensurable one-dimensional cavity is given by the density of states of a tight-binding Hamiltonian on a two-dimensional triangular lattice. This effect is due to an independent phase decoupling of the scattered waves, produced by the incommensurable nature of the system, which mimics a random noise generator. This principle can be applied to design a cavity that avoids resonant reflections for almost any incident wave. An optical analogy, using three mirrors with incommensurable distances between them, is also presented. Such an array produces a countably infinite fractal set of reflections, a phenomenon opposite to the effect of optical invisibility.

  7. Flux-probability distributions from the master equation for radiation transport in stochastic media

    International Nuclear Information System (INIS)

    Franke, Brian C.; Prinja, Anil K.

    2011-01-01

    We present numerical investigations into the accuracy of approximations in the master equation for radiation transport in discrete binary random media. Our solutions of the master equation yield probability distributions of particle flux at each element of phase space. We employ the Levermore-Pomraning interface closure and evaluate the effectiveness of closures for the joint conditional flux distribution for estimating scattering integrals. We propose a parameterized model for this joint-pdf closure, varying between correlation neglect and a full-correlation model. The closure is evaluated for a variety of parameter settings. Comparisons are made with benchmark results obtained through suites of fixed-geometry realizations of random media in rod problems. All calculations are performed using Monte Carlo techniques. Accuracy of the approximations in the master equation is assessed by examining the probability distributions for reflection and transmission and by evaluating the moments of the pdfs. The results suggest the correlation-neglect setting in our model performs best and shows improved agreement in the atomic-mix limit. (author)

  8. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' ≤ t). The limitations on the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  9. Two-dimensional analytic weighting functions for limb scattering

    Science.gov (United States)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.

  10. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Full Text Available Flood design estimates are key to sizing new water works and to reviewing the hydrological security of existing ones. The most reliable method for estimating the magnitudes associated with given return periods is to fit a probabilistic model to the available records of maximum annual flows. Since the appropriate model is not known in advance, several models need to be tested in order to select the most suitable one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records and, therefore, their application has been established as a norm or precept. The Johnson system has three families of distributions, one of which is the Log-Normal model with three fit parameters, which is also the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained with such a model. Having contrasted the three probability distributions established by precept on 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared to the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models for low return periods (< 1000 years). Because of its theoretical support, the SJU model is recommended for flood estimation.

  11. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Science.gov (United States)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of the real space (R³) into m arbitrary regions Ω₁, Ω₂, …, Ωₘ (Ω₁ ∪ Ω₂ ∪ … ∪ Ωₘ = R³), the edf program computes all the probabilities P(n₁, n₂, …, nₘ) of having exactly n₁ electrons in Ω₁, n₂ electrons in Ω₂, …, and nₘ electrons (n₁ + n₂ + ⋯ + nₘ = N) in Ωₘ. Each Ωᵢ may correspond to a single basin (atomic domain) or to several such basins (functional group). In the latter case, each atomic domain must belong to a single Ωᵢ. The program can manage both single- and multi-determinant wave functions, which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n₁, n₂, …, nₘ) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n₁, n₂, …, nₘ) probabilities into α and β spin components. Program summary Program title: edf Catalogue identifier: AEAJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5387 No. of bytes in distributed program, including test data, etc.: 52 381 Distribution format: tar.gz Programming language: Fortran 77 Computer
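    As a purely illustrative analogue (not the program's actual algorithm, which evaluates these probabilities from overlap integrals of the molecular wave function): if each of the N electrons were assigned to region Ωᵢ independently with probability qᵢ, the electron number probabilities P(n₁, …, nₘ) would be multinomial. The function and example values below are assumptions of this sketch.

```python
from math import comb

def multinomial_prob(ns, qs):
    # P(n_1, ..., n_m) for independent per-electron assignment
    # probabilities q_1, ..., q_m summing to 1
    assert abs(sum(qs) - 1.0) < 1e-12
    prob, remaining = 1.0, sum(ns)
    for n_i, q_i in zip(ns, qs):
        prob *= comb(remaining, n_i) * q_i ** n_i
        remaining -= n_i
    return prob

# two electrons split over two basins with equal probabilities
p_11 = multinomial_prob((1, 1), (0.5, 0.5))
```

    In the real program, electron correlation makes the true P(n₁, …, nₘ) deviate from this independent-electron reference, and that deviation is what the localization/delocalization indices quantify.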

  12. Joint probability distributions and fluctuation theorems

    International Nuclear Information System (INIS)

    García-García, Reinaldo; Kolton, Alejandro B; Domínguez, Daniel; Lecomte, Vivien

    2012-01-01

    We derive various exact results for Markovian systems that spontaneously relax to a non-equilibrium steady state by using joint probability distribution symmetries of different entropy production decompositions. The analytical approach is applied to diverse problems such as the description of the fluctuations induced by experimental errors, for unveiling symmetries of correlation functions appearing in fluctuation–dissipation relations recently generalized to non-equilibrium steady states, and also for mapping averages between different trajectory-based dynamical ensembles. Many known fluctuation theorems arise as special instances of our approach for particular twofold decompositions of the total entropy production. As a complement, we also briefly review and synthesize the variety of fluctuation theorems applying to stochastic dynamics of both continuous systems described by a Langevin dynamics and discrete systems obeying a Markov dynamics, emphasizing how these results emerge from distinct symmetries of the dynamical entropy of the trajectory followed by the system. For Langevin dynamics, we embed the 'dual dynamics' with a physical meaning, and for Markov systems we show how the fluctuation theorems translate into symmetries of modified evolution operators

  13. Extending models for two-dimensional constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    Random fields in two dimensions may be specified on 2 times 2 elements such that the probabilities of finite configurations and the entropy may be calculated explicitly. The Pickard random field is one example where probability of a new (non-boundary) element is conditioned on three previous elem...

  14. Quantile selection procedure and associated distribution of ratios of order statistics from a restricted family of probability distributions

    International Nuclear Information System (INIS)

    Gupta, S.S.; Panchapakesan, S.

    1975-01-01

    A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained and tables of the appropriate constants, percentiles of this statistic, are given in order to facilitate the use of the selection procedure

  15. The probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels in high temperature, high purity water

    International Nuclear Information System (INIS)

    Akashi, Masatsune; Kenjyo, Takao; Matsukura, Shinji; Kawamoto, Teruaki

    1984-01-01

    In order to discuss the probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels, a series of creviced bent beam (CBB) and uni-axial constant load tests were carried out in oxygenated high temperature, high purity water. The following conclusions were reached: (1) Based on the CBB test results, the initiation process of intergranular stress corrosion cracking can be approximated by a Poisson stochastic process. (2) The probability distribution of intergranular stress corrosion cracking life may consequently be approximated by the exponential probability distribution. (3) The experimental data could indeed be fitted to the exponential probability distribution. (author)
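    The link between conclusions (1) and (2) is the standard Poisson-process result: if cracks initiate at a constant rate λ, the waiting time to first initiation is exponential with survival function exp(−λt). A minimal Monte Carlo sketch of this relation (the rate, time point, and sample size are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0    # assumed crack-initiation rate (per unit time), illustrative
n = 20000    # number of simulated specimens

# Time to first initiation in a homogeneous Poisson process is exponential.
life = rng.exponential(1.0 / lam, size=n)

# Maximum-likelihood estimate of the rate is the reciprocal sample mean.
lam_hat = 1.0 / life.mean()

# Empirical survival fraction at time t versus the exponential model exp(-lam*t)
t = 0.5
surv_emp = (life > t).mean()
surv_model = np.exp(-lam * t)
```

A goodness-of-fit comparison of this kind, between the empirical survival fraction and the exponential model, is what conclusion (3) reports for the measured lives.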

  16. The simulation of a two-dimensional (2D) transport problem in a rectangular region with Lattice Boltzmann method with two-relaxation-time

    Science.gov (United States)

    Sugiyanto, S.; Hardyanto, W.; Marwoto, P.

    2018-03-01

    Transport phenomena are found in many problems in many engineering and industrial sectors. We analyzed a Lattice Boltzmann method with Two-Relaxation-Time (LTRT) collision operators for simulation of a pollutant moving through a medium, as a two-dimensional (2D) transport problem in a rectangular region model. The model consists of a 2D rectangular region of length (x) 54 and width (y) 27 with an isotropic homogeneous medium. Initially, the concentration is zero throughout the region of interest. A concentration of 1 is maintained at 9 < y < 18, whereas a concentration of zero is maintained at 0 < y < 9 and 18 < y < 27. A specific discharge (Darcy velocity) of 1.006 is assumed. A diffusion coefficient of 0.8333 is distributed uniformly, with a uniform porosity of 0.35. A computer program written in MATLAB computes the concentration of pollutant at any specified place and time. The program shows that the LTRT solution with quadratic equilibrium distribution functions (EDFs) and relaxation time τa = 1.0 is in good agreement with other numerical solution methods such as 3DLEWASTE (Hybrid Three-dimensional Lagrangian-Eulerian Finite Element Model of Waste Transport Through Saturated-Unsaturated Media) obtained by Yeh and 3DFEMWATER-LHS (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media with Latin Hypercube Sampling) obtained by Hardyanto.
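    The two-relaxation-time collision idea can be sketched in one dimension: each opposite-velocity pair of populations is split into symmetric and antisymmetric parts, which relax with their own times τ+ and τ−. The D1Q3 advection-diffusion sketch below is a simplified illustration, not the paper's 2D setup; lattice size, velocity, and relaxation times are assumptions:

```python
import numpy as np

nx, steps = 200, 100
u, tau_p, tau_m = 0.1, 1.0, 0.8    # advection velocity and TRT times (assumed)
w = np.array([2/3, 1/6, 1/6])      # D1Q3 weights; velocities c = (0, +1, -1)
cs2 = 1/3

# Initial Gaussian pulse of concentration
C0 = np.exp(-0.5 * ((np.arange(nx) - nx/2) / 5.0) ** 2)

def feq(C):
    # Linear equilibrium for an advection-diffusion scalar
    return np.array([w[0]*C, w[1]*C*(1 + u/cs2), w[2]*C*(1 - u/cs2)])

f = feq(C0)
for _ in range(steps):
    C = f.sum(axis=0)
    fe = feq(C)
    # Symmetric/antisymmetric split of the (+1, -1) population pair
    fp, fm = 0.5*(f[1] + f[2]), 0.5*(f[1] - f[2])
    fep, fem = 0.5*(fe[1] + fe[2]), 0.5*(fe[1] - fe[2])
    fp += -(fp - fep) / tau_p
    fm += -(fm - fem) / tau_m
    f[0] += -(f[0] - fe[0]) / tau_p      # rest population is symmetric
    f[1], f[2] = fp + fm, fp - fm
    # Streaming with periodic boundaries
    f[1] = np.roll(f[1], 1)
    f[2] = np.roll(f[2], -1)

C = f.sum(axis=0)
```

The collision conserves total concentration exactly, and the pulse drifts at roughly u lattice units per step while spreading diffusively; the fixed-concentration inflow boundary of the paper would replace the periodic wrap used here.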

  17. Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging.

    Science.gov (United States)

    Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho

    2004-12-01

    A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.

  18. Two-dimensional concentrated-stress low-frequency piezoelectric vibration energy harvesters

    Energy Technology Data Exchange (ETDEWEB)

    Sharpes, Nathan [Center for Energy Harvesting Materials and Systems (CEHMS), Virginia Tech, Blacksburg, Virginia 24061 (United States); Abdelkefi, Abdessattar [Department of Mechanical and Aerospace Engineering, New Mexico State University, Las Cruces, New Mexico 88003 (United States); Priya, Shashank [Center for Energy Harvesting Materials and Systems (CEHMS), Virginia Tech, Blacksburg, Virginia 24061 (United States); Bio-Inspired Materials and Devices Laboratory (BMDL), Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-08-31

    Vibration-based energy harvesters using piezoelectric materials have long made use of the cantilever beam structure. Surmounting the deficiencies of one-dimensional cantilever-based energy harvesters has been a major focus in the literature. In this work, we demonstrate a strategy of using two-dimensional beam shapes to harvest energy from low frequency excitations. A characteristic Zigzag-shaped beam is created to compare against the two proposed two-dimensional beam shapes, all of which occupy a 25.4 × 25.4 mm² area. In addition to maintaining the low-resonance bending frequency, the proposed beam shapes are designed with the goal of realizing a concentrated-stress structure, whereby stress in the beam is concentrated in a single area where a piezoelectric layer may be placed, rather than being distributed throughout the beam. It is shown analytically, numerically, and experimentally that one of the proposed harvesters is able to provide a significant increase in power production, when the base acceleration is set equal to 0.1 g, with only a minimal change in the resonant frequency compared to the current state-of-the-art Zigzag shape. This is accomplished by eliminating torsional effects, producing a purer bending motion that is necessary for high electromechanical coupling. In addition, the proposed harvesters have a large effective beam tip whereby a large tip mass may be placed while retaining a low profile, resulting in a low volume harvester and subsequently large power density.

  19. Two-dimensional thermal-hydraulic behavior in core in SCTF Core-II forced feed reflood tests

    International Nuclear Information System (INIS)

    Iwamura, Takamichi; Sobajima, Makoto; Okubo, Tsutomu; Ohnuki, Akira; Abe, Yutaka; Adachi, Hiromichi

    1987-01-01

    The major purpose of the Slab Core Test Program is to investigate the two-dimensional thermal-hydraulic behavior in the core during the reflood phase of a PWR-LOCA. The previous Slab Core Test Facility (SCTF) Core-II test results revealed that, in the non-uniform radial power profile tests, the heat transfer was enhanced in the higher power bundles and degraded in the lower power bundles. In order to evaluate separately the effect of the radial power (Q) distribution itself and the effect of the radial temperature (T) distribution, four tests were performed with steep Q and T, flat Q and T, steep Q and flat T, and flat Q and steep T. Based on the test results, it was concluded that the radial temperature distribution which accompanied the radial power distribution was the dominant factor of the two-dimensional thermal-hydraulic behavior in the core during the initial period. Selected data from these four tests are also presented in this report. Some data from Test S2-12 (steep Q, T) were compared with TRAC post-test calculations performed by the Los Alamos National Laboratory. (author)

  20. Collective probabilities algorithm for surface hopping calculations

    International Nuclear Information System (INIS)

    Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto

    2003-01-01

    General equations are derived that the transition probabilities of hopping algorithms in surface hopping calculations must obey in order to ensure equality between the average quantum and classical populations. These equations are solved for two particular cases. In the first, it is assumed that the probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations over all trajectories. In the second case, the probabilities for each trajectory are supposed to be completely independent of the results from the other trajectories. There is then a unique solution of the general equations assuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm which takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoided crossings, which shows the accuracy and computational efficiency of the proposed collective probabilities algorithm, the limitations of the FS algorithm, and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method
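    The population-matching bookkeeping behind the CP idea can be illustrated for a two-state ensemble: a single hop probability, shared by all trajectories in the source state, is chosen so that the average classical populations land on the quantum ones with the fewest hops. A toy sketch (the population values are invented for illustration and this is not the full algorithm of the paper):

```python
def cp_hop_probability(n_cl, n_q, src, tgt):
    """Shared hop probability src -> tgt that moves the average classical
    populations n_cl onto the quantum populations n_q.
    Returns 0 when the target population must not grow."""
    deficit = n_q[tgt] - n_cl[tgt]
    if deficit <= 0 or n_cl[src] == 0:
        return 0.0
    return min(1.0, deficit / n_cl[src])

# Classical ensemble currently over-populates state 0 (illustrative numbers)
n_cl = [0.7, 0.3]    # fraction of trajectories in each state
n_q = [0.6, 0.4]     # average quantum populations

p01 = cp_hop_probability(n_cl, n_q, 0, 1)
# Expected populations after applying the hop to every state-0 trajectory:
n_after = [n_cl[0] * (1 - p01), n_cl[1] + n_cl[0] * p01]
```

By construction the expected post-hop classical population of the target state equals its quantum population, and no reverse hops (1 → 0) are needed, which is the "minimum number of hops" condition of the CP solution.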

  1. Towards a theoretical determination of the geographical probability distribution of meteoroid impacts on Earth

    Science.gov (United States)

    Zuluaga, Jorge I.; Sucerquia, Mario

    2018-06-01

    The Tunguska and Chelyabinsk impact events occurred inside a geographical area covering only 3.4 per cent of the Earth's surface. Although two events hardly constitute a statistically significant demonstration of a geographical pattern of impacts, their spatial coincidence is at least tantalizing. To understand whether this concurrence reflects an underlying geographical and/or temporal pattern, we must aim at predicting the spatio-temporal distribution of meteoroid impacts on Earth. For this purpose we designed, implemented, and tested a novel numerical technique, `Gravitational Ray Tracing' (GRT), which computes the relative impact probability (RIP) on the surface of any planet. GRT is inspired by the ray-casting techniques used to render realistic images of complex 3D scenes. In this paper we describe the method and the results of testing it at the time of large impact events. Our findings suggest a non-trivial pattern of impact probabilities at any given time on the Earth. Locations at 60-90° from the apex are more prone to impacts, especially at midnight. Counterintuitively, sites close to the apex direction have the lowest RIP, while in the antapex RIP is slightly larger than average. We present preliminary maps of RIP at the times of the Tunguska and Chelyabinsk events and find no evidence of a spatial or temporal pattern, suggesting that their coincidence was fortuitous. We apply the GRT method to compute theoretical RIP at the location and time of 394 large fireballs. Although the predicted spatio-temporal impact distribution matches the observed events only marginally, we successfully predict their impact speed distribution.

  2. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
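    The qualitative claim, that the transient shape of the size distribution depends on how the noise amplitude scales with cluster size, can be probed with a direct Monte Carlo sketch. When the additive noise grows linearly with size, growth is effectively multiplicative and the size distribution becomes log-normal-like; all parameters below are illustrative, not taken from the letter:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_steps = 5000, 200
sigma = 0.05                       # noise amplitude per unit size (assumed)

x = np.ones(n_clusters)            # initial cluster sizes
for _ in range(n_steps):
    # size-dependent additive noise: amplitude proportional to current size,
    # i.e. x -> x + sigma * x * xi, an effectively multiplicative update
    x *= 1.0 + sigma * rng.standard_normal(n_clusters)

logs = np.log(x)
# Log-sizes are a sum of many weak i.i.d. terms -> approximately normal,
# so the size distribution itself is approximately log-normal.
skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
```

A size-independent (truly additive) noise term in the same loop would instead keep the distribution close to Gaussian around the deterministic growth, which is the kind of initial-condition and noise-scaling dependence the master-equation analysis addresses.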

  3. Engineering topological edge states in two dimensional magnetic photonic crystal

    Science.gov (United States)

    Yang, Bing; Wu, Tong; Zhang, Xiangdong

    2017-01-01

    Based on a perturbative approach, we propose a simple and efficient method to engineer the topological edge states in two dimensional magnetic photonic crystals. The topological edge states in the microstructures can be constructed and varied by altering the parameters of the microstructure according to the field-energy distributions of the Bloch states at the related Bloch wave vectors. The validity of the proposed method has been demonstrated by exact numerical calculations through three concrete examples. Our method makes the topological edge states "designable."

  4. An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows

    Energy Technology Data Exchange (ETDEWEB)

    Snider, D.M. [SAIC, Albuquerque, NM (United States); O'Rourke, P.J. [Los Alamos National Lab., NM (United States); Andrews, M.J. [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering

    1997-06-01

    A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to an Eulerian grid and then mapping back computed stress tensors to particle positions. This approach utilizes the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of types, sizes, and density of particles, with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution, thus avoiding Courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.

  5. On the issues of probability distribution of GPS carrier phase observations

    Science.gov (United States)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In common practice the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS

  6. Functional inks and printing of two-dimensional materials.

    Science.gov (United States)

    Hu, Guohua; Kang, Joohoon; Ng, Leonard W T; Zhu, Xiaoxi; Howe, Richard C T; Jones, Christopher G; Hersam, Mark C; Hasan, Tawfique

    2018-05-08

    Graphene and related two-dimensional materials provide an ideal platform for next generation disruptive technologies and applications. Exploiting these solution-processed two-dimensional materials in printing can accelerate this development by allowing additive patterning on both rigid and conformable substrates for flexible device design and large-scale, high-speed, cost-effective manufacturing. In this review, we summarise the current progress on ink formulation of two-dimensional materials and the printable applications enabled by them. We also present our perspectives on their research and technological future prospects.

  7. Novel Radiobiological Gamma Index for Evaluation of 3-Dimensional Predicted Dose Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, Iori, E-mail: sumida@radonc.med.osaka-u.ac.jp [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Yamaguchi, Hajime; Kizaki, Hisao; Aboshi, Keiko; Tsujii, Mari; Yoshikawa, Nobuhiko; Yamada, Yuji [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan); Suzuki, Osamu; Seo, Yuji [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Isohashi, Fumiaki [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan); Yoshioka, Yasuo [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Ogawa, Kazuhiko [Department of Radiation Oncology, NTT West Osaka Hospital, Osaka (Japan)

    2015-07-15

    Purpose: To propose a gamma index-based dose evaluation index that integrates the radiobiological parameters of tumor control (TCP) and normal tissue complication probabilities (NTCP). Methods and Materials: Fifteen prostate and head and neck (H&N) cancer patients received intensity modulated radiation therapy. Before treatment, patient-specific quality assurance was conducted via beam-by-beam analysis, and beam-specific dose error distributions were generated. The predicted 3-dimensional (3D) dose distribution was calculated by back-projection of relative dose error distribution per beam. A 3D gamma analysis of different organs (prostate: clinical [CTV] and planned target volumes [PTV], rectum, bladder, femoral heads; H&N: gross tumor volume [GTV], CTV, spinal cord, brain stem, both parotids) was performed using predicted and planned dose distributions under 2%/2 mm tolerance and physical gamma passing rate was calculated. TCP and NTCP values were calculated for voxels with physical gamma indices (PGI) >1. We propose a new radiobiological gamma index (RGI) to quantify the radiobiological effects of TCP and NTCP and calculate radiobiological gamma passing rates. Results: The mean RGI gamma passing rates for prostate cases were significantly different compared with those of PGI (P<.03–.001). The mean RGI gamma passing rates for H&N cases (except for GTV) were significantly different compared with those of PGI (P<.001). Differences in gamma passing rates between PGI and RGI were due to dose differences between the planned and predicted dose distributions. Radiobiological gamma distribution was visualized to identify areas where the dose was radiobiologically important. Conclusions: RGI was proposed to integrate radiobiological effects into PGI. This index would assist physicians and medical physicists not only in physical evaluations of treatment delivery accuracy, but also in clinical evaluations of predicted dose distribution.
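    For readers unfamiliar with the physical gamma index (PGI) that the proposed RGI extends, a minimal one-dimensional version of the standard dose-difference/distance-to-agreement computation is sketched below; the grid spacing, tolerance values, and toy dose profile are illustrative assumptions, not the study's clinical data:

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0):
    """Physical gamma index of each reference point against an evaluated
    dose profile on the same coordinate axis x (mm), using global dose
    normalization to the reference maximum (2%/2 mm by default)."""
    dmax = dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / (dose_tol * dmax)    # dose-difference term
        dr = (x - xi) / dist_tol                     # distance-to-agreement term
        gam[i] = np.sqrt(dd ** 2 + dr ** 2).min()
    return gam

x = np.linspace(0, 100, 201)                  # 0.5 mm spacing (assumed)
ref = np.exp(-0.5 * ((x - 50) / 15) ** 2)     # toy dose profile
gam_same = gamma_1d(ref, ref, x)              # identical profiles: gamma = 0
gam_off = gamma_1d(ref, 1.01 * ref, x)        # 1% global dose offset
pass_rate = (gam_off <= 1.0).mean()
```

Points with gamma > 1 fail the criterion; the RGI of the study replaces this purely geometric/dosimetric score with one weighted by TCP and NTCP for the voxels where PGI > 1.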

  8. Three-dimensional multi-relaxation-time lattice Boltzmann front-tracking method for two-phase flow

    International Nuclear Information System (INIS)

    Xie Hai-Qiong; Zeng Zhong; Zhang Liang-Qi

    2016-01-01

    We developed a three-dimensional multi-relaxation-time lattice Boltzmann method for incompressible and immiscible two-phase flow by coupling with a front-tracking technique. The flow field was simulated by using an Eulerian grid, an adaptive unstructured triangular Lagrangian grid was applied to track explicitly the motion of the two-fluid interface, and an indicator function was introduced to update accurately the fluid properties. The surface tension was computed directly on a triangular Lagrangian grid, and then the surface tension was distributed to the background Eulerian grid. Three benchmarks of two-phase flow, including the Laplace law for a stationary drop, the oscillation of a three-dimensional ellipsoidal drop, and the drop deformation in a shear flow, were simulated to validate the present model. (paper)

  9. K-FIX: a computer program for transient, two-dimensional, two-fluid flow. THREED: an extension of the K-FIX code for three-dimensional calculations

    International Nuclear Information System (INIS)

    Rivard, W.C.; Torrey, M.D.

    1978-10-01

    The transient, two-dimensional, two-fluid code K-FIX has been extended to perform three-dimensional calculations. This capability is achieved by adding five modification sets of FORTRAN statements to the basic two-dimensional code. The modifications are listed and described, and a complete listing of the three-dimensional code is provided. Results of an example problem are provided for verification

  10. Dose response study of PVA-Fx gel for three dimensional dose distribution

    International Nuclear Information System (INIS)

    Brindha, S.; Ayyangar, Komanduri M.; Shen, Bin; Saw, Cheng B.

    2001-01-01

    Modern radiotherapy techniques involve complex field arrangements using conformal and intensity modulated radiation that require three dimensional treatment planning. The verification of these plans poses an even greater challenge. In 1984, Gore et al. proposed that ferrous gel dosimeters combined with magnetic resonance imaging (MRI) could be used to measure three dimensional radiation dose distributions. Since then, there has been much interest in the development of gel dosimetry to aid the determination of three dimensional dose distributions for complex field arrangements. In this work, the preparation and study of the MR characteristics of a PVA-Fx gel reported in the literature are presented

  11. Examining barrier distributions and, in extension, energy derivative of probabilities for surrogate experiments

    International Nuclear Information System (INIS)

    Romain, P.; Duarte, H.; Morillon, B.

    2012-01-01

    The energy derivatives of probabilities are functions suited to a better understanding of certain mechanisms. Applied to compound nuclear reactions, they can bring information on fusion barrier distributions, as originally introduced, and also, as presented here, on fission barrier distributions and heights. By extension, they give access to the compound-nucleus spin-parity states preferentially populated through a given entrance channel at a given energy. (authors)

  12. Three-dimensional reconstruction of a radionuclide distribution within a medium of uniform coefficient of attenuation

    International Nuclear Information System (INIS)

    Diaz, J.E.

    1982-01-01

    The non-invasive, fully three-dimensional reconstruction of a radionuclide distribution is studied. The problem is considered in ideal form. Several solutions, ranging from the completely analytical to the completely graphical, are presented for both the non-attenuated and uniformly attenuated cases. A function is defined which, if enacted as a response to each detected photon, will yield, upon superposition, a faithful reconstruction of the radionuclide density. Two- and three-dimensional forms of this function are defined for both the non-attenuated and uniformly attenuated cases

  13. Two-dimensional critical phenomena

    International Nuclear Information System (INIS)

    Saleur, H.

    1987-09-01

    Two dimensional critical systems are studied using transformations to free fields and conformal invariance methods. The relations between the two approaches are also studied. The analytical results obtained generally depend on universality hypotheses or on renormalization group trajectories which are not established rigorously, so numerical verifications, mainly using the transfer matrix approach, are presented. The exact determination of critical exponents, the partition functions of critical models on tori, and results as the critical point is approached are discussed [fr

  14. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
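    The maximum entropy side of this correspondence is easy to make concrete: under mean and variance constraints, the maximum entropy density is the Gaussian, and the Lagrange multipliers of the exponential-family form can be found by gradient ascent on the concave dual. A numerical sketch on a grid (the grid range, step size, learning rate, and iteration count are assumptions for illustration):

```python
import numpy as np

dx = 0.01
x = np.arange(-6, 6, dx)
feats = np.vstack([x, x ** 2])         # sufficient statistics
m_target = np.array([0.0, 1.0])        # target mean and second moment

lam = np.array([0.0, -0.2])            # initial Lagrange multipliers
for _ in range(5000):
    logp = lam @ feats
    p = np.exp(logp - logp.max())
    p /= p.sum() * dx                  # normalize the density on the grid
    m = feats @ p * dx                 # model moments under current p
    lam += 0.05 * (m_target - m)       # ascent on the concave dual

# The converged maxent density should match the standard Gaussian
gauss = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
err = np.abs(p - gauss).max()
```

In the language of the paper, this maxent estimate is the infinite-smoothness limit of the corresponding Bayesian field theory; finite smoothness would instead return a regularized, lower-entropy density.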

  16. Three-dimensional space charge distribution measurement in electron beam irradiated PMMA

    International Nuclear Information System (INIS)

    Imaizumi, Yoichi; Suzuki, Ken; Tanaka, Yasuhiro; Takada, Tatsuo

    1996-01-01

    The localized space charge distribution in electron beam irradiated PMMA was investigated using the pulsed electroacoustic method. With a conventional space charge measurement system, only the distribution in the depth direction (Z) can be measured, assuming the charges are distributed uniformly in the horizontal (X-Y) plane. However, it is difficult to measure the distribution of space charge accumulated in a small area. Therefore, we have developed a new system to measure the three-dimensional space charge distribution using the pulsed electroacoustic method. The system has a small electrode with a diameter of 1 mm and a motor-driven X-Y stage to move the sample. Using the data measured at many points, the three-dimensional distribution was obtained. To estimate the system performance, electron beam irradiated PMMA was used. The electron beam was irradiated from a transmission electron microscope (TEM). The depth of the injected electrons was controlled using various metal masks. The measurement results were compared with theoretically calculated values of the electron range. (author)

  17. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    Science.gov (United States)

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signals parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals with reasonable computation cost.

  18. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    Science.gov (United States)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.

  19. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    Science.gov (United States)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high-resolution imaging deep into tissues. Unfortunately, deep-tissue multi-photon microscopy images are in general noisy, since they are acquired at low photon counts. To aid in the analysis and segmentation of such images, it is sometimes necessary to first enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, that the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and that, since the observed data also assume finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.

  20. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  1. A fast semi-discrete Kansa method to solve the two-dimensional spatiotemporal fractional diffusion equation

    Science.gov (United States)

    Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon

    2017-09-01

    Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.

  2. Interference patterns of Bose-condensed gases in a two-dimensional optical lattice

    International Nuclear Information System (INIS)

    Liu Shujuan; Xiong Hongwei; Xu Zhijun; Huang Guoxiang

    2003-01-01

    For a Bose-condensed gas confined in a magnetic trap and in a two-dimensional (2D) optical lattice, the non-uniform distribution of atoms in different lattice sites is considered based on the Gross-Pitaevskii equation. A propagator method is used to investigate the time evolution of 2D interference patterns after (i) only the optical lattice is switched off, and (ii) both the optical lattice and the magnetic trap are switched off. An analytical description of the motion of side peaks in the interference patterns is presented using the density distribution in momentum space.

  3. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability

  4. Two-dimensional model of a freely expanding plasma

    International Nuclear Information System (INIS)

    Khalid, Q.

    1975-01-01

    The free expansion of an initially confined plasma is studied by the computer experiment technique. The research is an extension to two dimensions of earlier work on the free expansion of a collisionless plasma in one dimension. In the two-dimensional rod model, developed in this research, the plasma particles, electrons and ions are modeled as infinitely long line charges or rods. The line charges move freely in two dimensions normal to their parallel axes, subject only to a self-consistent electric field. Two approximations, the grid approximation and the periodic boundary condition are made in order to reduce the computation time. In the grid approximation, the space occupied by the plasma at a given time is divided into boxes. The particles are subject to an average electric field calculated for that box assuming that the total charge within each box is located at the center of the box. However, the motion of each particle is exactly followed. The periodic boundary condition allows us to consider only one-fourth of the total number of particles of the plasma, representing the remaining three-fourths of the particles as symmetrically placed images of those whose positions are calculated. This approximation follows from the expected azimuthal symmetry of the plasma. The dynamics of the expansion are analyzed in terms of average ion and electron positions, average velocities, oscillation frequencies and relative distribution of energy between thermal, flow and electric field energies. Comparison is made with previous calculations of one-dimensional models which employed plane, spherical or cylindrical sheets as charged particles. In order to analyze the effect of the grid approximation, the model is solved for two different grid sizes and for each grid size the plasma dynamics is determined. For the initial phase of expansion, the agreement for the two grid sizes is found to be good

  5. THE ANGULAR MOMENTUM OF MAGNETIZED MOLECULAR CLOUD CORES: A TWO-DIMENSIONAL-THREE-DIMENSIONAL COMPARISON

    International Nuclear Information System (INIS)

    Dib, Sami; Csengeri, Timea; Audit, Edouard; Hennebelle, Patrick; Pineda, Jaime E.; Goodman, Alyssa A.; Bontemps, Sylvain

    2010-01-01

    In this work, we present a detailed study of the rotational properties of magnetized and self-gravitating dense molecular cloud (MC) cores formed in a set of two very high resolution three-dimensional (3D) MC simulations with decaying turbulence. The simulations have been performed using the adaptive mesh refinement code RAMSES with an effective resolution of 4096^3 grid cells. One simulation represents a mildly magnetically supercritical cloud and the other a strongly magnetically supercritical cloud. We identify dense cores at a number of selected epochs in the simulations at two density thresholds which roughly mimic the excitation densities of the NH3 (J,K) = (1,1) transition and the N2H+ (1-0) emission line. A noticeable global difference between the two simulations is the core formation efficiency (CFE) of the high-density cores. In the strongly supercritical simulations, the CFE is 33% per unit free-fall time of the cloud (t_ff,cl), whereas in the mildly supercritical simulations this value goes down to ∼6% per unit t_ff,cl. A comparison of the intrinsic specific angular momentum (j_3D) distributions of the cores with the specific angular momentum derived using synthetic two-dimensional (2D) velocity maps of the cores (j_2D) shows that the synthetic observations tend to overestimate the true value of the specific angular momentum by a factor of ∼8-10. We find that the distribution of the ratio j_3D/j_2D of the cores peaks at around ∼0.1. The origin of this discrepancy lies in the fact that, contrary to the intrinsic determination of j, which sums up the individual gas parcels' contributions to the angular momentum, the determination of the specific angular momentum using the standard observational procedure, which is based on a measurement of the global velocity gradient under the hypothesis of uniform rotation, smoothes out the complex fluctuations present in the 3D velocity field. Our results may well provide a natural explanation for the

  6. TRANSHEX, 2-D Thermal Neutron Flux Distribution from Epithermal Flux in Hexagonal Geometry

    International Nuclear Information System (INIS)

    Patrakka, E.

    1994-01-01

    1 - Description of program or function: TRANSHEX is a multigroup integral transport program that determines the thermal scalar flux distribution arising from a known epithermal flux in two-dimensional hexagonal geometry. 2 - Method of solution: The program solves the isotropic collision probability equations for a region-averaged scalar flux by an iterative method. Either a successive over-relaxation or an inner-outer iteration technique is applied. Flat-flux collision probabilities between trigonal space regions with a white boundary condition are utilized. The effect of the epithermal flux is taken into consideration as a slowing-down source that is calculated for a given spatial distribution and 1/E energy dependence of the epithermal flux.
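    The successive over-relaxation (SOR) inner iteration mentioned in the method description can be sketched generically; the small diagonally dominant system below is illustrative, not TRANSHEX's actual collision-probability matrices:

    ```python
    import numpy as np

    # Generic SOR iteration for A x = b: sweep through the unknowns,
    # blending the Gauss-Seidel update with the previous value via omega.
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([2.0, 4.0, 10.0])
    omega = 1.1                      # over-relaxation factor, 1 < omega < 2
    x = np.zeros(3)
    for _ in range(100):
        for i in range(3):
            sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    print(x)
    ```

    For diagonally dominant or symmetric positive definite systems the sweep converges for any 0 < omega < 2; the optimal omega depends on the spectral radius of the Jacobi iteration matrix.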

  7. Constituent quarks as clusters in quark-gluon-parton model. [Total cross sections, probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1976-12-01

    We present a quark-gluon-parton model in which quark-partons and gluons make clusters corresponding to two or three constituent quarks (or anti-quarks) in the meson or in the baryon, respectively. We explicitly construct the constituent quark state (cluster) by employing the Kuti-Weisskopf theory and by requiring scaling. The quark additivity of the hadronic total cross sections and the quark counting rules on the threshold powers of various distributions are satisfied. For small x (Feynman fraction), it is shown that the constituent quarks and quark-partons have quite different probability distributions. We apply our model to hadron-hadron inclusive reactions, and clarify that the fragmentation and diffractive processes relate to the constituent quark distributions, while processes in or near the central region are controlled by the quark-partons. Our model gives a reasonable interpretation of the experimental data and substantially improves the usual ''constituent interchange model'' result near and in the central region (x ≈ x_T ≈ 0).

  8. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    Science.gov (United States)

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  9. Broken ergodicity in two-dimensional homogeneous magnetohydrodynamic turbulence

    International Nuclear Information System (INIS)

    Shebalin, John V.

    2010-01-01

    Two-dimensional (2D) homogeneous magnetohydrodynamic (MHD) turbulence has many of the same qualitative features as three-dimensional (3D) homogeneous MHD turbulence. These features include several ideal (i.e., nondissipative) invariants along with the phenomenon of broken ergodicity (defined as nonergodic behavior over a very long time). Broken ergodicity appears when certain modes act like random variables with mean values that are large compared to their standard deviations, indicating a coherent structure or dynamo. Recently, the origin of broken ergodicity in 3D MHD turbulence that is manifest in the lowest wavenumbers was found. Here, we study the origin of broken ergodicity in 2D MHD turbulence. It will be seen that broken ergodicity in ideal 2D MHD turbulence can be manifest in the lowest wavenumbers of a finite numerical model for certain initial conditions or in the highest wavenumbers for another set of initial conditions. The origins of broken ergodicity in an ideal 2D homogeneous MHD turbulence are found through an eigenanalysis of the covariance matrices of the probability density function and by an examination of the associated entropy functional. When the values of ideal invariants are kept fixed and grid size increases, it will be shown that the energy in a few large modes remains constant, while the energy in any other mode is inversely proportional to grid size. Also, as grid size increases, we find that broken ergodicity becomes manifest at more and more wavenumbers.

  10. Two-dimensional capillary origami

    Energy Technology Data Exchange (ETDEWEB)

    Brubaker, N.D., E-mail: nbrubaker@math.arizona.edu; Lega, J., E-mail: lega@math.arizona.edu

    2016-01-08

    We describe a global approach to the problem of capillary origami that captures all unfolded equilibrium configurations in the two-dimensional setting where the drop is not required to fully wet the flexible plate. We provide bifurcation diagrams showing the level of encapsulation of each equilibrium configuration as a function of the volume of liquid that it contains, as well as plots representing the energy of each equilibrium branch. These diagrams indicate at what volume level the liquid drop ceases to be attached to the endpoints of the plate, which depends on the value of the contact angle. As in the case of pinned contact points, three different parameter regimes are identified, one of which predicts instantaneous encapsulation for small initial volumes of liquid. - Highlights: • Full solution set of the two-dimensional capillary origami problem. • Fluid does not necessarily wet the entire plate. • Global energy approach provides exact differential equations satisfied by minimizers. • Bifurcation diagrams highlight three different regimes. • Conditions for spontaneous encapsulation are identified.

  11. Two-dimensional capillary origami

    International Nuclear Information System (INIS)

    Brubaker, N.D.; Lega, J.

    2016-01-01

    We describe a global approach to the problem of capillary origami that captures all unfolded equilibrium configurations in the two-dimensional setting where the drop is not required to fully wet the flexible plate. We provide bifurcation diagrams showing the level of encapsulation of each equilibrium configuration as a function of the volume of liquid that it contains, as well as plots representing the energy of each equilibrium branch. These diagrams indicate at what volume level the liquid drop ceases to be attached to the endpoints of the plate, which depends on the value of the contact angle. As in the case of pinned contact points, three different parameter regimes are identified, one of which predicts instantaneous encapsulation for small initial volumes of liquid. - Highlights: • Full solution set of the two-dimensional capillary origami problem. • Fluid does not necessarily wet the entire plate. • Global energy approach provides exact differential equations satisfied by minimizers. • Bifurcation diagrams highlight three different regimes. • Conditions for spontaneous encapsulation are identified.

  12. Turbulent equipartitions in two dimensional drift convection

    International Nuclear Information System (INIS)

    Isichenko, M.B.; Yankov, V.V.

    1995-01-01

    Unlike the thermodynamic equipartition of energy in conservative systems, turbulent equipartitions (TEP) describe strongly non-equilibrium systems such as turbulent plasmas. In turbulent systems, energy is no longer a good invariant, but one can utilize the conservation of other quantities, such as adiabatic invariants, frozen-in magnetic flux, entropy, or combinations thereof, in order to derive new, turbulent quasi-equilibria. These TEP equilibria assume various forms, but in general they sustain spatially inhomogeneous distributions of the usual thermodynamic quantities such as density or temperature. This mechanism explains the effects of particle and energy pinch in tokamaks. The analysis of the relaxed states caused by turbulent mixing is based on the existence of Lagrangian invariants (quantities constant along fluid-particle or other orbits). A turbulent equipartition corresponds to a spatially uniform distribution of the relevant Lagrangian invariants. The existence of such turbulent equilibria is demonstrated in the simple model of a two-dimensional electrostatically turbulent plasma in an inhomogeneous magnetic field. The turbulence is prescribed, and the turbulent transport is assumed to be much stronger than the classical collisional transport. The simplicity of the model makes it possible to derive the equations describing the relaxation to the TEP state in several limits.

  13. Two-dimensional black holes and non-commutative spaces

    International Nuclear Information System (INIS)

    Sadeghi, J.

    2008-01-01

    We study the effects of non-commutative spaces on the two-dimensional black hole. The event horizon of the two-dimensional black hole is obtained in non-commutative space up to second order in perturbative calculations. A lower limit for the non-commutativity parameter is also obtained. In that limit, in contrast to the commutative case, the observer sees two horizons.

  14. Critical behavior of the two-dimensional first passage time

    International Nuclear Information System (INIS)

    Chayes, J.T.; Chayes, L.; Durrett, R.

    1986-01-01

    We study the two-dimensional first passage problem in which bonds have zero and unit passage times with probability p and 1-p, respectively. We prove that as the zero-time bonds approach the percolation threshold p_c, the first passage time exhibits the same critical behavior as the correlation function of the underlying percolation problem. In particular, if the correlation length obeys ξ(p) ~ |p - p_c|^(-ν), then the first passage time constant satisfies μ(p) ~ |p - p_c|^ν. At p_c, where it has been asserted that the first passage time from 0 to x scales as |x| to a power ψ with 0 < ψ < 1, we show that the passage times grow like log|x|, i.e., the fluid spreads exponentially rapidly.

  15. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments. Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables. The Law of Large Numbers; Conditional Expectation; Generating Functions. Branching Processes. Random Walk Revisited; Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples. Probability Distributions of Markov Chains; The First Step Analysis. Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case. Simulation; Distribution F...

  16. Two-dimensional Navier-Stokes turbulence in bounded domains

    NARCIS (Netherlands)

    Clercx, H.J.H.; van Heijst, G.J.F.

    In this review we will discuss recent experimental and numerical results of quasi-two-dimensional decaying and forced Navier–Stokes turbulence in bounded domains. We will give a concise overview of developments in two-dimensional turbulence research, with emphasis on the progress made during the

  17. Two-dimensional Navier-Stokes turbulence in bounded domains

    NARCIS (Netherlands)

    Clercx, H.J.H.; Heijst, van G.J.F.

    2009-01-01

    In this review we will discuss recent experimental and numerical results of quasi-two-dimensional decaying and forced Navier–Stokes turbulence in bounded domains. We will give a concise overview of developments in two-dimensional turbulence research, with emphasis on the progress made during the

  18. Piezoelectricity in Two-Dimensional Materials

    KAUST Repository

    Wu, Tao; Zhang, Hua

    2015-01-01

    Powering up 2D materials: Recent experimental studies confirmed the existence of piezoelectricity - the conversion of mechanical stress into electricity - in two-dimensional single-layer MoS2 nanosheets. The results represent a milestone towards

  19. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    Science.gov (United States)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area, using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane and generates a peak ground motion. The appropriate ground motion prediction equations (GMPE) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with a constant distribution of the centroid at the geometrical mean is discussed; in this model hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
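    An illustrative Monte Carlo sketch of a site-to-source distance distribution (not the paper's closed-form method): the rupture point is taken uniformly distributed along a finite fault trace, and the empirical distribution of its distance to a fixed site is tabulated. All geometry values are assumed for the example:

    ```python
    import math
    import random

    # Assumed geometry: site at the origin, straight vertical fault trace
    # from (5, -10) to (5, 10), distances in km.
    random.seed(1)
    site = (0.0, 0.0)
    a, b = (5.0, -10.0), (5.0, 10.0)      # fault trace endpoints
    samples = []
    for _ in range(10000):
        u = random.random()                # rupture point uniform on the fault
        rx = a[0] + u * (b[0] - a[0])
        ry = a[1] + u * (b[1] - a[1])
        samples.append(math.hypot(rx - site[0], ry - site[1]))
    # empirical probability that the site-to-rupture distance is at most 8 km
    p_within_8 = sum(d <= 8.0 for d in samples) / len(samples)
    print(round(p_within_8, 3))
    ```

    For this line-source geometry the analytic value is sqrt(8^2 - 5^2)/10 ≈ 0.625, so the estimate can be checked directly; the paper's contribution is handling the much harder area (dipping fault plane) case.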

  20. Ruin Probabilities and Aggregate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.

  1. An investigation of two-dimensional, two-phase flow of steam in a cascade of turbine blading by the time-marching method

    International Nuclear Information System (INIS)

    Teymourtash, A. R.; Mahpeykar, M. R.

    2003-01-01

    During the course of expansion in turbines, the steam at first supercools and then nucleates to become a two-phase mixture. This is an area where greater understanding can lead to improved design. This paper describes a numerical method for the solution of two-dimensional, two-phase flow of steam in a cascade of turbine blading; the unsteady Euler equations governing the overall behaviour of the fluid are combined with equations describing droplet behaviour and treated by the Jasmine fourth-order Runge-Kutta time-marching scheme, modified to allow for two-phase effects. The theoretical surface pressure distributions, droplet radii, and contours of constant wetness fraction are presented, and results are discussed in the light of knowledge of actual surface pressure distributions.

  2. Solution of the two-dimensional spectral factorization problem

    Science.gov (United States)

    Lawton, W. M.

    1985-01-01

    An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.

  3. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    Science.gov (United States)

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R_max, and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy ε(R) = R. The effective chemical potential μ governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter β decreases. Interestingly, βμ is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high-density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R_g, where R_g is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed into Fermi-like subdistributions for different atomic types τ, F_τ(r), with Σ_τ F_τ(r) = F(r), which depend on two additional parameters μ_τ and h_τ. The chemical potential μ_τ affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h_τ, which appears in a type-dependent atomic effective energy, ε_τ(r) = h_τ r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, ε_τ*(r) = h_τ* r^α, in which case a correlation with hydrophobicity
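    The Fermi-Dirac occupancy factor underlying the fitted distribution can be written down directly; the parameter values below are assumed for illustration, not fitted to protein data:

    ```python
    import math

    # Fermi-Dirac occupancy factor with effective atomic "energy" eps(R) = R:
    #   f(R) = 1 / (exp(beta * (R - mu)) + 1)
    # mu plays the role of the chemical potential (half-occupancy radius) and
    # beta of an inverse temperature controlling the sharpness of the edge.
    def fermi(R, mu, beta):
        return 1.0 / (math.exp(beta * (R - mu)) + 1.0)

    mu, beta = 15.0, 0.8   # assumed values (angstroms, inverse angstroms)
    print(fermi(mu, mu, beta))    # at R = mu the factor is exactly 0.5
    print(fermi(0.0, mu, beta))   # near the core: close to 1 (full density)
    ```

    The full radial probability in the record also carries the shell-volume factor proportional to R^2; the occupancy factor above is only the Fermi part of that product.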

  4. Matrix method for two-dimensional waveguide mode solution

    Science.gov (United States)

    Sun, Baoguang; Cai, Congzhong; Venkatesh, Balajee Seshasayee

    2018-05-01

    In this paper, we show that the transfer matrix theory of multilayer optics can be used to solve the modes of any two-dimensional (2D) waveguide for their effective indices and field distributions. A 2D waveguide, even composed of numerous layers, is essentially a multilayer stack and the transmission through the stack can be analysed using the transfer matrix theory. The result is a transfer matrix with four complex value elements, namely A, B, C and D. The effective index of a guided mode satisfies two conditions: (1) evanescent waves exist simultaneously in the first (cladding) layer and last (substrate) layer, and (2) the complex element D vanishes. For a given mode, the field distribution in the waveguide is the result of a 'folded' plane wave. In each layer, there is only propagation and absorption; at each boundary, only reflection and refraction occur, which can be calculated according to the Fresnel equations. As examples, we show that this method can be used to solve modes supported by the multilayer step-index dielectric waveguide, slot waveguide, gradient-index waveguide and various plasmonic waveguides. The results indicate the transfer matrix method is effective for 2D waveguide mode solution in general.
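    The transfer-matrix bookkeeping described in this record can be sketched in the simpler textbook setting of transmission through a lossless dielectric stack at normal incidence (layer values below are assumed; the paper extends this 2x2 machinery to guided modes, where the element D of the total matrix vanishes):

    ```python
    import numpy as np

    # Characteristic 2x2 matrix of one lossless layer at normal incidence.
    def layer_matrix(n, d, wavelength):
        delta = 2 * np.pi * n * d / wavelength   # phase thickness
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    wavelength = 0.55                    # micrometres (assumed)
    n0, ns = 1.0, 1.5                    # cladding and substrate indices
    layers = [(2.3, 0.06), (1.38, 0.1)]  # (index, thickness) pairs, assumed
    M = np.eye(2, dtype=complex)
    for n, d in layers:                  # multiply layer matrices in order
        M = M @ layer_matrix(n, d, wavelength)

    (m11, m12), (m21, m22) = M
    denom = n0 * m11 + n0 * ns * m12 + m21 + ns * m22
    r = (n0 * m11 + n0 * ns * m12 - m21 - ns * m22) / denom
    t = 2 * n0 / denom
    R, T = abs(r)**2, (ns / n0) * abs(t)**2
    print(R, T)   # for lossless layers, R + T = 1 (energy conservation)
    ```

    Energy conservation R + T = 1 follows from the unit determinant of each layer matrix, which is a useful sanity check on any transfer-matrix implementation.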

  5. Spatial probability aids visual stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Michael Druker

    2010-08-01

    We investigated whether the statistical predictability of a target's location would influence how quickly and accurately it was classified. Recent results have suggested that spatial probability can be a cue for the allocation of attention in visual search. One explanation for probability cuing is spatial repetition priming. In our two experiments we used probability distributions that were continuous across the display rather than relying on a few arbitrary screen locations. This produced fewer spatial repeats and allowed us to dissociate the effect of a high-probability location from that of short-term spatial repetition. The task required participants to quickly judge the color of a single dot presented on a computer screen. In Experiment 1, targets were more probable in an off-center hotspot of high probability that gradually declined to a background rate. Targets garnered faster responses if they were near earlier target locations (priming) and if they were near the high-probability hotspot (probability cuing). In Experiment 2, target locations were chosen on three concentric circles around fixation. One circle contained 80% of targets. The value of this ring distribution is that it allowed for a spatially restricted high-probability zone in which sequentially repeated trials were not likely to be physically close. Participant performance was sensitive to the high-probability circle in addition to the expected effects of eccentricity and the distance to recent targets. These two experiments suggest that inhomogeneities in spatial probability can be learned and used by participants on-line and without prompting as an aid to visual stimulus discrimination, and that spatial repetition priming is not a sufficient explanation for this effect. Future models of attention should consider explicitly incorporating the probabilities of target locations and features.

  6. Development of Two-Dimensional NMR

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 20; Issue 11. Development of Two-Dimensional NMR: Structure Determination of Biomolecules in Solution. Anil Kumar. General Article Volume 20 Issue 11 November 2015 pp 995-1002 ...

  7. ONE-DIMENSIONAL AND TWO-DIMENSIONAL LEADERSHIP STYLES

    OpenAIRE

    Nikola Stefanović

    2007-01-01

    In order to motivate their group members to perform certain tasks, leaders use different leadership styles. These styles are based on leaders' backgrounds, knowledge, values, experiences, and expectations. The one-dimensional styles, used by many world leaders, are autocratic and democratic styles. These styles lie on the two opposite sides of the leadership spectrum. In order to precisely define the leadership styles on the spectrum between the autocratic leadership style and the democratic ...

  8. Infrared magneto-spectroscopy of two-dimensional and three-dimensional massless fermions: A comparison

    Energy Technology Data Exchange (ETDEWEB)

    Orlita, M., E-mail: milan.orlita@lncmi.cnrs.fr [Laboratoire National des Champs Magnétiques Intenses, CNRS-UJF-UPS-INSA, 38042 Grenoble (France); Faculty of Mathematics and Physics, Charles University, Ke Karlovu 5, 121 16 Prague 2 (Czech Republic); Faugeras, C.; Barra, A.-L.; Martinez, G.; Potemski, M. [Laboratoire National des Champs Magnétiques Intenses, CNRS-UJF-UPS-INSA, 38042 Grenoble (France); Basko, D. M. [LPMMC UMR 5493, Université Grenoble 1/CNRS, B.P. 166, 38042 Grenoble (France); Zholudev, M. S. [Laboratoire Charles Coulomb (L2C), UMR CNRS 5221, GIS-TERALAB, Université Montpellier II, 34095 Montpellier (France); Institute for Physics of Microstructures, RAS, Nizhny Novgorod GSP-105 603950 (Russian Federation); Teppe, F.; Knap, W. [Laboratoire Charles Coulomb (L2C), UMR CNRS 5221, GIS-TERALAB, Université Montpellier II, 34095 Montpellier (France); Gavrilenko, V. I. [Institute for Physics of Microstructures, RAS, Nizhny Novgorod GSP-105 603950 (Russian Federation); Mikhailov, N. N.; Dvoretskii, S. A. [A.V. Rzhanov Institute of Semiconductor Physics, Siberian Branch, Russian Academy of Sciences, Novosibirsk 630090 (Russian Federation); Neugebauer, P. [Institut für Physikalische Chemie, Universität Stuttgart, Pfaffenwaldring 55, 70569 Stuttgart (Germany); Berger, C. [School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Institut Néel/CNRS-UJF BP 166, F-38042 Grenoble Cedex 9 (France); Heer, W. A. de [School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States)

    2015-03-21

    Here, we report on a magneto-optical study of two distinct systems hosting massless fermions: two-dimensional graphene and three-dimensional HgCdTe tuned to the zero band gap condition at the point of the semiconductor-to-semimetal topological transition. Both materials exhibit, in the quantum regime, a fairly rich magneto-optical response, which is composed of a series of intra- and interband inter-Landau-level resonances with the √(B) dependence typical of massless fermions. The impact of the system's dimensionality and of the strength of the spin-orbit interaction on the optical response is also discussed.

  9. One-dimensional versus two-dimensional electronic states in vicinal surfaces

    International Nuclear Information System (INIS)

    Ortega, J E; Ruiz-Oses, M; Cordon, J; Mugarza, A; Kuntze, J; Schiller, F

    2005-01-01

    Vicinal surfaces with periodic arrays of steps are among the simplest lateral nanostructures. In particular, noble metal surfaces vicinal to the (1 1 1) plane are excellent test systems to explore the basic electronic properties in one-dimensional superlattices by means of angular photoemission. These surfaces are characterized by strong emissions from free-electron-like surface states that scatter at step edges. Thereby, the two-dimensional surface state displays superlattice band folding and, depending on the step lattice constant d, it splits into one-dimensional quantum well levels. Here we use high-resolution, angle-resolved photoemission to analyse surface states in a variety of samples, to illustrate the changes in surface-state bands as a function of d.

  10. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotic field. It has also recently been used for biometric and multimedia information retrieval systems. This technology is attained from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone serves as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
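The per-frame empirical PDF described in this record can be sketched as a normalized histogram of sample amplitudes. The frame length, bin count, amplitude range, and the two synthetic "voices" below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def frame_pdf_features(signal, frame_len=1024, n_bins=32):
    """Empirical PDF (normalized histogram) of sample amplitudes,
    computed per frame, as a simple per-frame feature vector."""
    n_frames = len(signal) // frame_len
    features = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # Histogram over a fixed amplitude range so frames are comparable
        hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0),
                               density=True)
        features.append(hist)
    return np.array(features)

# Two synthetic "voices" with different amplitude statistics:
rng = np.random.default_rng(0)
voice_a = 0.2 * rng.standard_normal(8192)   # narrow, peaked PDF
voice_b = rng.uniform(-0.9, 0.9, 8192)      # broad, flat PDF
fa = frame_pdf_features(voice_a)
fb = frame_pdf_features(voice_b)
```

The peaked Gaussian-like signal yields a much higher maximum density than the flat uniform one, which is the kind of visually separable PDF shape the record describes.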

  11. Densis. Densimetric representation of two-dimensional matrices

    International Nuclear Information System (INIS)

    Los Arcos Merino, J.M.

    1978-01-01

    Densis is a Fortran V program which allows off-line control of a Calcomp digital plotter to represent a two-dimensional matrix of numerical elements in the form of a variable shading intensity map in two colours. Each matrix element is associated with a square of a grid, which is traced over by lines whose number is a function of the element value according to a selected scale. Program features, subroutine structure and running instructions are described. Some typical results, for gamma-gamma coincidence experimental data and a sampled two-dimensional function, are indicated. (author)
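The core idea of Densis, mapping each matrix element to a shading level proportional to its value according to a selected scale, can be sketched in a few lines. This is a hypothetical text-based analogue, not the original Fortran plotter code; the character ramp and the sampled Gaussian are assumptions:

```python
import numpy as np

def shade_map(matrix, levels=" .:-=+*#%@"):
    """Render a 2D matrix as a character shading map: each element
    maps to a shading level proportional to its value."""
    m = np.asarray(matrix, dtype=float)
    lo, hi = m.min(), m.max()
    if hi == lo:
        scaled = np.zeros(m.shape, dtype=int)
    else:
        # Linear scale from the value range onto the available levels
        scaled = ((m - lo) / (hi - lo) * (len(levels) - 1)).astype(int)
    return "\n".join("".join(levels[v] for v in row) for row in scaled)

# A sampled two-dimensional function, as in the record's example
x = np.linspace(-2, 2, 40)
z = np.exp(-(x[None, :]**2 + x[:, None]**2))  # 2D Gaussian bump
print(shade_map(z))
```

Darker characters cluster at the central peak, giving the variable-intensity map effect the record describes.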

  12. A three-dimensional finite element study on the stress distribution pattern of two prosthetic abutments for external hexagon implants.

    Science.gov (United States)

    Moreira, Wagner; Hermann, Caio; Pereira, Jucélio Tomás; Balbinoti, Jean Anacleto; Tiossi, Rodrigo

    2013-10-01

    The purpose of this study was to evaluate the mechanical behavior of two different straight prosthetic abutments (one- and two-piece) for external hex butt-joint connection implants using three-dimensional finite element analysis (3D-FEA). Two 3D-FEA models were designed, one for the two-piece prosthetic abutment (2 mm in height, two-piece mini-conical abutment, Neodent) and another one for the one-piece abutment (2 mm in height, Slim Fit one-piece mini-conical abutment, Neodent), with their corresponding screws and implants (Titamax Ti, 3.75 diameter by 13 mm in length, Neodent). The model simulated the single restoration of a lower premolar using data from a computerized tomography of a mandible. The preload (20 N) after torque application for installation of the abutment and an occlusal loading were simulated. The occlusal load was simulated using average physiological bite force and direction (114.6 N in the axial direction, 17.1 N in the lingual direction and 23.4 N toward the mesial at an angle of 75° to the occlusal plane). The regions with the highest von Mises stress results were at the bottom of the initial two threads of both prosthetic abutments that were tested. The one-piece prosthetic abutment presented a more homogeneous behavior of stress distribution when compared with the two-piece abutment. Under the simulated chewing loads, the von Mises stresses for both tested prosthetic abutments were within the tensile strength values of the materials analyzed, which supports the clinical use of both prosthetic abutments.

  13. Development of a Two-dimensional Thermohydraulic Hot Pool Model and ITS Effects on Reactivity Feedback during a UTOP in Liquid Metal Reactors

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Jeong, Hae Yong; Cho, Chung Ho; Kwon, Young Min; Ha, Kwi Seok; Chang, Won Pyo; Suk, Soo Dong; Hahn, Do Hee

    2009-01-01

    The existence of a large sodium pool in the KALIMER, a pool-type LMR developed by the Korea Atomic Energy Research Institute, plays an important role in reactor safety and operability because it determines the grace time for operators to cope with an abnormal event and to terminate a transient before the reactor enters an accident condition. A two-dimensional hot pool model has been developed and implemented in the SSC-K code, and has been successfully applied for the assessment of safety issues in the conceptual design of KALIMER and for the analysis of anticipated system transients. The other important models of the SSC-K code include a three-dimensional core thermal-hydraulic model, a reactivity model, a passive decay heat removal system model, and an intermediate heat transport system and steam generation system model. The capability of the developed two-dimensional hot pool model was evaluated by comparison with the temperature distribution calculated with the CFX code. The hot pool coolant temperature distributions obtained with the two-dimensional hot pool model agreed well with those predicted with the CFX code. Variations in the temperature distribution of the hot pool affect the reactivity feedback due to an expansion of the control rod drive line (CRDL) immersed in the pool. The existing CRDL reactivity model of the SSC-K code has been modified based on the detailed hot pool temperature distribution obtained with the two-dimensional pool model. An analysis of an unprotected transient over power with the modified reactivity model showed an improved negative reactivity feedback effect.

  14. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is offered. From the Renyi dimensionalities calculated from experimental data, whether for hadron distributions over rapidity intervals or for particle distributions in an N-dimensional momentum space, one can judge the degree of correlation of particles and identify the momentum-space projections and regions where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs
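A standard way to estimate the Renyi dimensionalities used in this record is box counting: partition the space into boxes of side ε, form the box-occupation probabilities p_i, and fit the scaling of log Σ p_i^q / (q−1) against log ε. This is a generic sketch of that estimator, not the paper's specific algorithm; the sample sizes and box sides are assumptions:

```python
import numpy as np

def renyi_dimension(points, q, eps_values):
    """Box-counting estimate of the Renyi dimension D_q:
    D_q is the slope of log(sum_i p_i^q)/(q-1) versus log(eps)."""
    log_eps, log_partition = [], []
    for eps in eps_values:
        # Assign each point to a box of side eps
        boxes = np.floor(points / eps).astype(int)
        _, counts = np.unique(boxes, axis=0, return_counts=True)
        p = counts / counts.sum()
        if q == 1:
            # Information dimension: slope of sum p*log(p) vs log(eps)
            log_partition.append(np.sum(p * np.log(p)))
        else:
            log_partition.append(np.log(np.sum(p ** q)) / (q - 1))
        log_eps.append(np.log(eps))
    slope, _ = np.polyfit(log_eps, log_partition, 1)
    return slope

# Uniform points in the unit square: D_q should be close to 2 for any q
rng = np.random.default_rng(1)
pts = rng.random((200000, 2))
d0 = renyi_dimension(pts, q=0, eps_values=[0.2, 0.1, 0.05])
```

For a genuinely fractal sample (or a correlated multiparticle final state) the estimated D_q falls below the embedding dimension and varies with q, which is the diagnostic the record describes.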

  15. Charged fluid distribution in higher dimensional spheroidal space-time

    Indian Academy of Sciences (India)

    A general solution of Einstein field equations corresponding to a charged fluid distribution on the background of higher dimensional spheroidal space-time is obtained. The solution generates several known solutions for superdense star having spheroidal space-time geometry.

  16. Delineating Hydrofacies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics

    Energy Technology Data Exchange (ETDEWEB)

    Song, Xuehang [Florida State Univ., Tallahassee, FL (United States); Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, Xingyuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ye, Ming [Florida State Univ., Tallahassee, FL (United States); Dai, Zhenxue [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hammond, Glenn Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. This framework couples ensemble data assimilation with a transition-probability-based geostatistical model via a parameterization based on a level set function. The nature of ensemble data assimilation makes the framework efficient and flexible enough to be integrated with various types of observation data. The transition-probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated by using a two-dimensional synthetic study that estimates hydrofacies spatial distribution and permeability in each hydrofacies from transient head data. Our results show that the proposed framework can characterize hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real problems.

  17. Resonance fluorescence based two- and three-dimensional atom localization

    Science.gov (United States)

    Wahab, Abdul; Rahmatullah; Qamar, Sajid

    2016-06-01

    Two- and three-dimensional atom localization in a two-level atom-field system via resonance fluorescence is suggested. For the two-dimensional localization, the atom interacts with two orthogonal standing-wave fields, whereas for the three-dimensional atom localization, the atom interacts with three orthogonal standing-wave fields. The effect of the detuning and phase shifts associated with the corresponding standing-wave fields is investigated. A precision enhancement in position measurement of the single atom can be noticed via the control of the detuning and phase shifts.

  18. Toward two-dimensional search engines

    International Nuclear Information System (INIS)

    Ermann, L; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We study the statistical properties of various directed networks using ranking of their nodes based on the dominant vectors of the Google matrix known as PageRank and CheiRank. On average PageRank orders nodes proportionally to the number of ingoing links, while CheiRank orders nodes proportionally to the number of outgoing links. In this way, the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank–CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, the Linux Kernel, gene regulation and other networks. Special emphasis is placed on the networks of British universities, using the large database publicly available in the UK. Methods of spam link control are also analyzed. (paper)
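The two rankings described here can be sketched with one power iteration routine: PageRank is the dominant eigenvector of the Google matrix, and CheiRank is the same computation on the link-inverted network. The toy adjacency matrix and the damping factor below are illustrative assumptions:

```python
import numpy as np

def pagerank(adj, alpha=0.85, n_iter=100):
    """Dominant eigenvector of the Google matrix by power iteration.
    adj[i, j] = 1 means there is a link from node j to node i."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    # Column-stochastic matrix; dangling nodes spread weight uniformly
    s = np.where(col_sums > 0, adj / np.maximum(col_sums, 1.0), 1.0 / n)
    g = alpha * s + (1 - alpha) / n
    p = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        p = g @ p
    return p / p.sum()

# Toy directed network: node 0 receives many links (high PageRank),
# node 3 sends many links (high CheiRank).
adj = np.array([[0, 1, 1, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
pr = pagerank(adj)      # PageRank: ordered by ingoing links
cr = pagerank(adj.T)    # CheiRank: same iteration on inverted links
```

Plotting each node at its (PageRank, CheiRank) position gives the two-dimensional ranking plane the record analyzes.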

  19. Ionization of oriented targets by intense circularly polarized laser pulses: Imprints of orbital angular nodes in the two-dimensional momentum distribution

    DEFF Research Database (Denmark)

    Martiny, Christian; Abu-Samha, Mahmoud; Madsen, Lars Bojer

    2010-01-01

    We solve the three-dimensional time-dependent Schrödinger equation for a few-cycle circularly polarized femtosecond laser pulse that interacts with an oriented target exemplified by an argon atom, initially in a 3px or 3py state. The photoelectron momentum distributions show distinct signatures o...

  20. Subjective figure reversal in two- and three-dimensional perceptual space.

    Science.gov (United States)

    Radilová, J; Radil-Weiss, T

    1984-08-01

    A permanently illuminated pattern of Mach's truncated pyramid can be perceived, according to the experimental instruction given, either as a three-dimensional reversible figure with spontaneously changing convex and concave interpretations (in one experiment), or as a two-dimensional reversible figure-ground pattern (in another experiment). The reversal rate was about twice as slow, without the subjects being aware of it, when the pattern was perceived as a three-dimensional figure compared to when it was perceived as two-dimensional. It may be hypothesized that in the three-dimensional case the process of perception requires more sequential steps than in the two-dimensional one.