WorldWideScience

Sample records for flux probability distribution

  1. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  2. Flux-probability distributions from the master equation for radiation transport in stochastic media

    International Nuclear Information System (INIS)

    Franke, Brian C.; Prinja, Anil K.

    2011-01-01

    We present numerical investigations into the accuracy of approximations in the master equation for radiation transport in discrete binary random media. Our solutions of the master equation yield probability distributions of particle flux at each element of phase space. We employ the Levermore-Pomraning interface closure and evaluate the effectiveness of closures for the joint conditional flux distribution for estimating scattering integrals. We propose a parameterized model for this joint-pdf closure, varying between correlation neglect and a full-correlation model. The closure is evaluated for a variety of parameter settings. Comparisons are made with benchmark results obtained through suites of fixed-geometry realizations of random media in rod problems. All calculations are performed using Monte Carlo techniques. Accuracy of the approximations in the master equation is assessed by examining the probability distributions for reflection and transmission and by evaluating the moments of the pdfs. The results suggest the correlation-neglect setting in our model performs best and shows improved agreement in the atomic-mix limit. (author)

  3. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka; Slosar, Anze

    2018-01-01

    The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which can bias the estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values, as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, the coefficients can be measured in the presence of noise, and there is a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured, with a very sharp transition into noise dominance. This compresses the information into a small number of well-measured quantities. Finally, we find that measuring fewer quasars at high signal-to-noise yields more recoverable information.
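
    The moment-based estimator described above can be sketched in a few lines (the beta-distributed "flux" samples and the truncation order below are illustrative assumptions, not the paper's data):

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)
# Stand-in for transmitted-flux samples in [0, 1]; a beta law is an
# illustrative assumption, not the Lyman-alpha data of the paper.
flux = rng.beta(2.0, 5.0, size=100_000)

x = 2.0 * flux - 1.0  # map [0, 1] onto Legendre's natural domain [-1, 1]
nmax = 8

# c_n = (2n+1)/2 * E[P_n(X)]; each E[P_n(X)] is a linear combination
# of the first n moments of the field, as the abstract notes.
coeffs = np.array([
    (2 * n + 1) / 2.0 * leg.legval(x, np.eye(nmax + 1)[n]).mean()
    for n in range(nmax + 1)
])

# Reconstruct the PDF estimate from the truncated expansion
grid = np.linspace(-1.0, 1.0, 201)
pdf_est = leg.legval(grid, coeffs)
```

    Because P_0 = 1 and the PDF integrates to one, the zeroth coefficient is always 1/2, which makes a handy sanity check.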

  4. TRANSHEX, 2-D Thermal Neutron Flux Distribution from Epithermal Flux in Hexagonal Geometry

    International Nuclear Information System (INIS)

    Patrakka, E.

    1994-01-01

    1 - Description of program or function: TRANSHEX is a multigroup integral transport program that determines the thermal scalar flux distribution arising from a known epithermal flux in two-dimensional hexagonal geometry. 2 - Method of solution: The program solves the isotropic collision probability equations for a region-averaged scalar flux by an iterative method. Either a successive over-relaxation or an inner-outer iteration technique is applied. Flat-flux collision probabilities between trigonal space regions with a white boundary condition are utilized. The effect of the epithermal flux is taken into consideration as a slowing-down source, calculated for a given spatial distribution and a 1/E energy dependence of the epithermal flux.

  5. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov [Brookhaven National Laboratory, Bldg 510, Upton, NY, 11973 (United States)

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n -th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  6. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anže

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  7. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
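
    The kind of computation COVAL performs can be illustrated with a Monte Carlo sketch (the normal load/strength model and its parameters are invented for illustration; COVAL itself works by numerical transformation of the distributions, not by sampling):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Invented example: a structure whose load and strength are both random.
load = rng.normal(100.0, 15.0, N)       # random applied load
strength = rng.normal(150.0, 10.0, N)   # random structural resistance

# Distribution of a function of the random variables: the safety margin
margin = strength - load
failure_prob = float(np.mean(margin < 0.0))  # P(margin < 0)
```

    For this toy model the margin is itself normal with mean 50 and standard deviation sqrt(15² + 10²) ≈ 18, so the failure probability is about 0.003; the sampled estimate should agree.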

  8. Quantum processes: probability fluxes, transition probabilities in unit time and vacuum vibrations

    International Nuclear Information System (INIS)

    Oleinik, V.P.; Arepjev, Ju D.

    1989-01-01

    Transition probabilities per unit time and probability fluxes are compared in a study of elementary quantum processes: the decay of a bound state under the action of time-varying and constant electric fields. It is shown that the difference between these quantities may be considerable, so that using transition probabilities W instead of probability fluxes Π when calculating particle fluxes may lead to serious errors. The quantity W represents the rate of change with time of the populations of the energy levels, which relate partly to real states and partly to virtual ones, and it cannot be measured directly in experiment. The vacuum background is shown to be continuously distorted when a perturbation acts on a system. Because of this, an observer's view of the physical properties of real particles varies continuously with time. This fact is not taken into consideration in the conventional theory of quantum transitions based on the notion of probability amplitude, and as a result the probability amplitudes lose their physical meaning. All the physical information on the quantum dynamics of a system is contained in the mean values of physical quantities. The existence of considerable differences between the quantities W and Π makes it possible, in principle, to choose the correct theory of quantum transitions on the basis of experimental data. (author)

  9. The measurements of thermal neutron flux distribution in a paraffin

    Indian Academy of Sciences (India)

    The term 'thermal flux' implies a Maxwellian distribution of velocity and energy corresponding to the most probable velocity of 2200 m s⁻¹ at 293.4 K. In order to measure the thermal neutron flux density, the foil activation method was used. The thermal neutron flux in a paraffin phantom was determined by counting the emitted rays of ...
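
    The quoted figure is easy to check from the most probable speed of a Maxwellian distribution, v_p = sqrt(2·k_B·T/m_n):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
m_n = 1.674927e-27   # neutron mass, kg
T = 293.4            # temperature quoted in the abstract, K

# Most probable speed of a Maxwellian velocity distribution, in m/s
v_p = math.sqrt(2.0 * k_B * T / m_n)
```

    This evaluates to roughly 2199 m/s, consistent with the conventional 2200 m s⁻¹ reference speed for thermal neutrons.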

  10. Improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1986-01-01

    An improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell has been developed. By expanding the neutron flux and source into a series of even powers of the radius, one gets a convenient method for integrating the one-energy-group integral transport equation. It is shown that it is possible to perform an analytical integration in the x-y plane in one variable and to use effective Gaussian integration over the other. By choosing a convenient distribution of space points in the fuel and moderator, the transport matrix calculation and cell reaction rate integration were condensed. On the basis of the proposed method, the computer program DISKRET for the ZUSE-Z 23 K computer has been written. The suitability of the proposed method for calculating the thermal-neutron-flux distribution in a reactor cell can be seen from the test results obtained. Compared with other collision probability methods, the proposed treatment excels in mathematical simplicity and faster convergence. (author)

  11. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    International Nuclear Information System (INIS)

    Shafii, Mohammad Ali; Meidianti, Rahma; Wildian; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-01-01

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the neutron transport equation is solved with the CP method using the flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with the spatial mesh treated in a non-flat flux approach: the neutron flux at each point in the nuclear fuel cell is allowed to differ, following a quadratic flux distribution. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for application in computational calculations.

  12. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Mohammad Ali, E-mail: mashafii@fmipa.unand.ac.id; Meidianti, Rahma, E-mail: mashafii@fmipa.unand.ac.id; Wildian, E-mail: mashafii@fmipa.unand.ac.id; Fitriyani, Dian, E-mail: mashafii@fmipa.unand.ac.id [Department of Physics, Andalas University Padang West Sumatera Indonesia (Indonesia); Tongkukut, Seni H. J. [Department of Physics, Sam Ratulangi University Manado North Sulawesi Indonesia (Indonesia); Arkundato, Artoto [Department of Physics, Jember University Jember East Java Indonesia (Indonesia)

    2014-09-30

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the neutron transport equation is solved with the CP method using the flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with the spatial mesh treated in a non-flat flux approach: the neutron flux at each point in the nuclear fuel cell is allowed to differ, following a quadratic flux distribution. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for application in computational calculations.

  13. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

    Motivated by the increasing need to analyze complex, uncertain multidimensional data, this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work is motivated by a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions.
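
    A minimal example of the kind of aggregation involved: if two uncertain measures are represented as discrete probability distributions, the distribution of their SUM is the convolution of the two probability mass functions (the values below are invented for illustration):

```python
import numpy as np

# Two uncertain measures as discrete pmfs over the values 0, 1, ...
a = np.array([0.2, 0.5, 0.3])   # P(A = 0), P(A = 1), P(A = 2)
b = np.array([0.6, 0.4])        # P(B = 0), P(B = 1)

# Aggregating with SUM: the pmf of A + B is the convolution of the pmfs
s = np.convolve(a, b)           # pmf of A + B over the values 0..3
```

    Note how the support grows with each aggregation step, which is why the paper approximates the distributions to keep pre-aggregated results small.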

  14. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex, uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions, together with the techniques to process these queries. The paper also presents methods for computing the probability distributions, which enable pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate computations for the kind of probabilistic multidimensional data analysis considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions).

  15. Neutron Flux Interpolation with Finite Element Method in the Nuclear Fuel Cell Calculation using Collision Probability Method

    International Nuclear Information System (INIS)

    Shafii, M. Ali; Su'ud, Zaki; Waris, Abdul; Kurniasih, Neny; Ariani, Menik; Yulianti, Yanti

    2010-01-01

    Nuclear reactor design and analysis of next-generation reactors require comprehensive computations that are best executed on high-performance computers. The flat flux (FF) approach is a common approach to solving an integral transport equation with the collision probability (CP) method. In fact, the neutron flux distribution is not flat, even when the neutron cross section is assumed equal in all regions and the neutron source is uniform throughout the nuclear fuel cell. In the non-flat flux (NFF) approach, the distribution of neutrons in each region differs depending on the chosen interpolation model. In this study, linear interpolation using the Finite Element Method (FEM) has been carried out to treat the neutron distribution. The CP method is well suited to solving the neutron transport equation for cylindrical geometry, because the angular integration can be done analytically. The distribution of neutrons in each region can be described by the NFF approach with FEM, and the calculated results are in good agreement with those from the SRAC code. The effects of the mesh on k_eff and other parameters are also investigated.

  16. SLAROM, Neutron Flux Distribution and Spectra in Lattice Cell

    International Nuclear Information System (INIS)

    Nakagawa, M.; Tsuchihashi, K.

    2002-01-01

    1 - Description of program or function: SLAROM solves the neutron integral transport equations to determine flux distribution and spectra in a lattice and calculates cell averaged effective cross sections. 2 - Method of solution: Collision probability method for cell calculation and 1D diffusion for core calculation. 3 - Restrictions on the complexity of the problem: Variable dimensions are used throughout the program so that computer core requirements depend on a variety of program parameters

  17. EL-2 reactor: Thermal neutron flux distribution

    International Nuclear Information System (INIS)

    Rousseau, A.; Genthon, J.P.

    1958-01-01

    The distribution of the thermal neutron flux in the EL-2 reactor is studied. The reactor core and lattices are described, as well as the experimental reactor facilities, in particular the experimental channels and special facilities. The measurements show that the thermal neutron flux increases in the central channel when enriched uranium is used in place of natural uranium; however, the thermal neutron flux in the other reactor channels is not perturbed by the fuel modification. The macroscopic flux distribution is measured as a function of the radial positioning of the fuel rods. The longitudinal neutron flux distribution in a fuel rod is also measured and shows no difference between enriched and natural uranium fuel rods. In addition, flux distribution measurements have been made for rods containing other materials such as steel or aluminium. The neutron flux distribution is also studied in all the experimental channels as well as in the thermal column. The determination of the thermal neutron flux distribution in all experimental facilities, the thermal column, and the fuel channels was made with a heavy water level of 1825 mm and is given for an operating power of 1000 kW. (M.P.)

  18. A method to calculate flux distribution in reactor systems containing materials with grain structure

    International Nuclear Information System (INIS)

    Stepanek, J.

    1980-01-01

    A method is proposed to compute the neutron flux spatial distribution in slab, spherical, or cylindrical systems containing zones with a close grain structure of the material. Several different types of equally distributed particles embedded in the matrix material are allowed in one or more zones. The multi-energy-group structure of the flux is considered. The collision probability method is used to compute the fluxes in the grains and in an 'effective' part of the matrix material. The overall structure of the flux distribution in the zones with homogenized materials is then determined using the DPN 'surface flux' method. The two computations are connected through the balance equation during the outer iterations. The proposed method is implemented in the code SURCU-DH. Two test cases are computed and discussed. The first is the computation of the eigenvalue, in simplified slab geometry, of an LWR container with one zone of boral grains equally distributed in an aluminium matrix. The second is the computation of the eigenvalue, in spherical geometry, of an HTR pebble-bed cell with spherical particles embedded in a graphite matrix. The results are compared with those obtained by repeated use of the WIMS code. (author)

  19. Flux continuity and probability conservation in complexified Bohmian mechanics

    International Nuclear Information System (INIS)

    Poirier, Bill

    2008-01-01

    Recent years have seen increased interest in complexified Bohmian mechanical trajectory calculations for quantum systems as both a pedagogical and computational tool. In the latter context, it is essential that trajectories satisfy probability conservation to ensure they are always guided to where they are most needed. We consider probability conservation for complexified Bohmian trajectories. The analysis relies on time-reversal symmetry considerations, leading to a generalized expression for the conjugation of wave functions of complexified variables. This in turn enables meaningful discussion of complexified flux continuity, which turns out not to be satisfied in general, though a related property is found to be true. The main conclusion, though, is that even under a weak interpretation, probability is not conserved along complex Bohmian trajectories

  20. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use the extreme values of acquisition functions obtained by Gaussian processes as the next training points, since these should be located near a local or global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently when combined with the steepest descent method, and it is thus a powerful tool for finding a better maximizer of computationally extensive probability distributions.
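
    A minimal sketch of the idea, assuming a toy one-dimensional log-density, a squared-exponential Gaussian process, and an upper-confidence-bound acquisition function (all illustrative choices on our part; the paper's models and acquisition functions may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Toy stand-in for an expensive-to-evaluate log-probability density
    return -0.5 * ((x - 2.0) / 0.5) ** 2   # peaked at x = 2

def rbf(a, b, ell=0.8):
    # Squared-exponential covariance between two point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = rng.uniform(-5.0, 5.0, 4)   # small initial design
y = log_target(X)
grid = np.linspace(-5.0, 5.0, 501)

for _ in range(25):
    K = rbf(X, X) + 1e-8 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)                    # GP posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))     # acquisition function
    x_next = grid[np.argmax(ucb)]                      # next training point
    X = np.append(X, x_next)
    y = np.append(y, log_target(x_next))

best = X[np.argmax(y)]   # current estimate of the maximizer
```

    Each round evaluates the expensive density only once, at the acquisition-function maximum, which is the economy the abstract describes.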

  1. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    Science.gov (United States)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  2. Mixed analytical-stochastic simulation method for the recovery of a Brownian gradient source from probability fluxes to small windows.

    Science.gov (United States)

    Dobramysl, U; Holcman, D

    2018-02-15

    Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.

  3. EL-2 reactor: Thermal neutron flux distribution; EL-2: Repartition du flux de neutrons thermiques

    Energy Technology Data Exchange (ETDEWEB)

    Rousseau, A; Genthon, J P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1958-07-01

    The distribution of the thermal neutron flux in the EL-2 reactor is studied. The reactor core and lattices are described, as well as the experimental reactor facilities, in particular the experimental channels and special facilities. The measurements show that the thermal neutron flux increases in the central channel when enriched uranium is used in place of natural uranium; however, the thermal neutron flux in the other reactor channels is not perturbed by the fuel modification. The macroscopic flux distribution is measured as a function of the radial positioning of the fuel rods. The longitudinal neutron flux distribution in a fuel rod is also measured and shows no difference between enriched and natural uranium fuel rods. In addition, flux distribution measurements have been made for rods containing other materials such as steel or aluminium. The neutron flux distribution is also studied in all the experimental channels as well as in the thermal column. The determination of the thermal neutron flux distribution in all experimental facilities, the thermal column, and the fuel channels was made with a heavy water level of 1825 mm and is given for an operating power of 1000 kW. (M.P.)

  4. Neutron flux distribution forecasting device of reactor

    International Nuclear Information System (INIS)

    Uematsu, Hitoshi

    1991-01-01

    A neutron flux distribution is forecast using current data obtained from the reactor. The device of the present invention comprises (1) neutron flux monitors disposed at various positions in the reactor, (2) a forecasting means for calculating and forecasting a one-dimensional neutron flux distribution for foreseeable events, using data obtained from the neutron flux monitors together with physical models, and (3) a display means for presenting the forecast results on the reactor operation console. Since the forecast values for the one-dimensional neutron flux distribution are calculated from data obtained from the neutron flux monitors and the physical models, the underlying data are up to date and the time required to calculate the forecast values is short. Accordingly, although the forecast values may contain some errors, they can be used fully as reference data. As a result, the reactor can be operated more appropriately. (I.N.)

  5. Effect of axial heat flux distribution on CHF

    Energy Technology Data Exchange (ETDEWEB)

    Park, Cheol

    2000-10-01

    Previous investigations of the effect of axial heat flux distributions on CHF, and the associated prediction methods, are reviewed and summarized. A total of 856 CHF data points for a tube with a non-uniform axial heat flux distribution have been compiled from the literature and analyzed using the 1995 Groeneveld look-up table. The results showed that two representative correction factors, the K5 factor of the look-up table and Tong's F factor, can be applied to describe the effect of the axial heat flux distribution on CHF. However, they slightly overpredict the measured CHF, depending on the quality and the flux peak shape. Hence, a corrected K5 factor, which accounts for the axial heat flux distribution effect, is suggested to correct these trends. It predicted the CHF power for the compiled data with an average error of 1.5% and a standard deviation of 10.3%, and also provides a reasonable prediction of CHF locations.

  6. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  7. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
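
    The Onemax case can be sketched exactly: a parent with k ones loses fitness through ones that flip to zero and gains it through zeros that flip to one, so the offspring fitness distribution is a convolution of two binomials, and each entry of the pmf is a polynomial in p as the abstract states (the function names are ours, and this elementary construction stands in for the paper's Krawtchouk-polynomial machinery):

```python
import numpy as np
from math import comb

def binom_pmf(n, p):
    # pmf of Binomial(n, p) as an array indexed by the count
    return np.array([comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)])

def onemax_offspring_pmf(n, k, p):
    """Exact fitness pmf of an offspring under uniform bit-flip mutation.

    The parent has Onemax fitness k (k one-bits among n). Each bit flips
    independently with probability p: flipping a one lowers fitness by 1,
    flipping a zero raises it by 1.
    """
    lost = binom_pmf(k, p)        # ones flipped to zeros
    gained = binom_pmf(n - k, p)  # zeros flipped to ones
    pmf = np.zeros(n + 1)
    for i, pi in enumerate(lost):
        for j, pj in enumerate(gained):
            pmf[k - i + j] += pi * pj
    return pmf

pmf = onemax_offspring_pmf(n=10, k=6, p=0.1)
```

    A quick consistency check: the expected offspring fitness is k(1 − p) + (n − k)p, since each one survives with probability 1 − p and each zero flips with probability p.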

  8. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  9. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E.

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)
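As a minimal illustration of the idea (not the Los Alamos code itself), the sketch below performs a grid-based Bayesian update of a log-normal prior against a single measurement with assumed normal noise; all parameter values are hypothetical.

```python
import math

def lognormal_pdf(x, mu, sigma):
    if x <= 0:
        return 0.0
    return math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma**2)) / (x * sigma * math.sqrt(2 * math.pi))

def normal_pdf(y, mean, sd):
    return math.exp(-((y - mean) ** 2) / (2 * sd**2)) / (sd * math.sqrt(2 * math.pi))

def posterior_mean(y, sd, mu, sigma, n=2000, xmax=50.0):
    """Grid-based Bayes: posterior(x) ~ N(y | x, sd) * LogNormal(x | mu, sigma)."""
    xs = [(i + 0.5) * xmax / n for i in range(n)]
    w = [normal_pdf(y, x, sd) * lognormal_pdf(x, mu, sigma) for x in xs]
    z = sum(w)
    return sum(x * wi for x, wi in zip(xs, w)) / z

# Hypothetical bioassay result y = 5 (arbitrary units), noise sd = 1,
# log-normal prior with median 2.
pm = posterior_mean(y=5.0, sd=1.0, mu=math.log(2.0), sigma=1.0)
```

The posterior mean lands between the prior median and the measurement, shrunk toward the prior as the noise grows.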

  10. Some applications of the fractional Poisson probability distribution

    International Nuclear Information System (INIS)

    Laskin, Nick

    2009-01-01

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.

  11. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution in the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
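A quick way to see the effect is to fit both models to synthetic skewed data: a Gaussian fit places its mode at the sample mean, whereas a log-normal fit recovers a lower mode. The roughness scale below is hypothetical.

```python
import math, random

random.seed(1)
mu, sigma = math.log(5.0), 0.5          # hypothetical roughness scale (nm) and skew
data = [random.lognormvariate(mu, sigma) for _ in range(20000)]

gaussian_mode = sum(data) / len(data)   # a symmetric Gaussian fit puts its mode at the mean
logs = [math.log(x) for x in data]
m = sum(logs) / len(logs)
s2 = sum((v - m) ** 2 for v in logs) / len(logs)
lognormal_mode = math.exp(m - s2)       # mode of the fitted log-normal, exp(mu - sigma^2)
```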

  12. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    Science.gov (United States)

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  13. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

    Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to the "Sub-committee on St...".

  14. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s_a can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s_a. The consequences of varying clonogenic density are also explored. Finally, the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP is shown, as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs
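A minimal sketch of a Poisson-statistics TCP model of this kind, with an inter-patient spread in the radiosensitivity parameter averaged numerically over a truncated normal; all parameter values below are illustrative, not the paper's.

```python
import math

def tcp(D, N0, alpha):
    """Poisson TCP for a uniform dose D (Gy), N0 clonogens, radiosensitivity alpha."""
    return math.exp(-N0 * math.exp(-alpha * D))

def tcp_pop(D, N0, alpha_bar, sigma_a, n=400):
    """Population TCP: average over a normal spread in alpha (truncated at 0)."""
    if sigma_a == 0:
        return tcp(D, N0, alpha_bar)
    total = w = 0.0
    for i in range(n):
        a = alpha_bar + sigma_a * (-4 + 8 * (i + 0.5) / n)   # grid over +/- 4 sigma
        if a <= 0:
            continue
        wi = math.exp(-((a - alpha_bar) ** 2) / (2 * sigma_a**2))
        total += wi * tcp(D, N0, a)
        w += wi
    return total / w

# Hypothetical values: 1e7 clonogens, alpha = 0.3 /Gy, spread 0.06 /Gy.
t60 = tcp(60.0, 1e7, 0.3)
p60 = tcp_pop(60.0, 1e7, 0.3, 0.06)
p45 = tcp_pop(45.0, 1e7, 0.3, 0.06)
```

Heterogeneity flattens the dose-response curve: the population TCP is lower than the single-patient TCP above the 50% dose and higher below it.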

  15. Converting dose distributions into tumour control probability

    Energy Technology Data Exchange (ETDEWEB)

    Nahum, A E [The Royal Marsden Hospital, London (United Kingdom). Joint Dept. of Physics

    1996-08-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP) and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter a through s{sub a} can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and s{sub a}. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP are shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs.

  16. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combination of the input distribution is defined by the user by introducing the appropriate FORTRAN statements to the appropriate subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined
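The sampling scheme can be sketched in a few lines: draw one value from each input distribution, combine them with a user-supplied function, and summarize the resulting sample. The inputs and the combining function below are hypothetical, not STADIC's FORTRAN interface.

```python
import random, statistics

def combine(dists, func, n=20000, seed=0):
    """Monte Carlo combination of input distributions (STADIC-style sketch):
    each trial draws one sample from every input and applies the
    user-supplied combining function."""
    rng = random.Random(seed)
    samples = [func(*(d(rng) for d in dists)) for _ in range(n)]
    samples.sort()
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    lo, hi = samples[int(0.05 * n)], samples[int(0.95 * n)]   # 90% interval
    return mean, sd, (lo, hi)

# Example: product of a log-normal factor and a uniform factor (hypothetical inputs).
mean, sd, ci = combine(
    [lambda r: r.lognormvariate(0.0, 0.5), lambda r: r.uniform(0.8, 1.2)],
    lambda a, b: a * b)
```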

  17. An observer-theoretic approach to estimating neutron flux distribution

    International Nuclear Information System (INIS)

    Park, Young Ho; Cho, Nam Zin

    1989-01-01

    State feedback control provides many advantages such as stabilization and improved transient response. However, when the state feedback control is considered for spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the flux spatial distribution using only limited flux measurements. It is based on the Luenberger observer in control theory, extended to the distributed parameter systems such as the space-time reactor dynamics equation. The results of the application of the method to simple reactor models showed that the flux distribution is estimated by the observer very efficiently using information from only a few sensors
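The idea can be illustrated on a toy two-state discrete-time system standing in for a spatially discretized flux model: a Luenberger observer driven by a single measurement reconstructs both states. The system matrices and gain below are invented for the illustration (the gain is chosen so A - LC is stable).

```python
def step(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[0.90, 0.10],
     [0.05, 0.90]]          # hypothetical two-node coupling matrix
C = [1.0, 0.0]              # single detector measures node 1 only
L = [0.50, 0.20]            # observer gain; eigenvalues of A - L*C are 0.87, 0.43

x, xhat = [1.0, 0.5], [0.0, 0.0]   # true state vs. observer starting from zero
for _ in range(50):
    y = sum(c * xi for c, xi in zip(C, x))        # measurement of the true state
    yhat = sum(c * xi for c, xi in zip(C, xhat))  # observer's predicted measurement
    x = step(A, x)
    xhat = [v + L[i] * (y - yhat) for i, v in enumerate(step(A, xhat))]

err = max(abs(a - b) for a, b in zip(x, xhat))    # estimation error after 50 steps
```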

  18. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
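For instance, a truncated exponential slip distribution can be sampled by inverse transform, and its mean sits below the untruncated scale, reflecting the cap on maximum slip. The scale and maximum slip below are hypothetical, not fitted SRCMOD values.

```python
import math, random

def sample_trunc_exp(lam, smax, rng):
    """Inverse-CDF draw from an exponential with scale lam, truncated to [0, smax]."""
    u = rng.random()
    return -lam * math.log(1.0 - u * (1.0 - math.exp(-smax / lam)))

rng = random.Random(42)
lam, smax = 1.0, 2.5       # hypothetical scale (m) and maximum possible slip (m)
slips = [sample_trunc_exp(lam, smax, rng) for _ in range(50000)]
mean_slip = sum(slips) / len(slips)
# Analytic mean: lam - smax*exp(-smax/lam)/(1 - exp(-smax/lam)) ~ 0.776 here.
```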

  19. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  20. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    Science.gov (United States)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with either a power-law distribution or a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.

  1. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived using multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by a hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
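A two-phase version of this moment matching can be sketched directly: given a mean and a squared coefficient of variation in [0.5, 1), the two exponential phase means follow from a quadratic. (Below 0.5, more phases are needed; this sketch does not handle that case.)

```python
import math, random

def hypoexp_two_phase(mean, scv):
    """Match a mean and squared coefficient of variation (0.5 <= scv < 1)
    with a two-phase hypoexponential (sum of two independent exponentials).
    The phase means a, b solve a + b = mean and a^2 + b^2 = scv * mean^2."""
    assert 0.5 <= scv < 1.0
    disc = math.sqrt(mean * mean * (2 * scv - 1))
    return (mean + disc) / 2, (mean - disc) / 2

a, b = hypoexp_two_phase(mean=1.0, scv=0.7)

# Sanity check by simulation: expovariate takes a rate, so mean a -> rate 1/a.
rng = random.Random(7)
xs = [rng.expovariate(1 / a) + rng.expovariate(1 / b) for _ in range(100000)]
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
cv2 = v / (m * m)
```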

  2. The epithermal neutron-flux distribution in the reactor RA - Vinca

    International Nuclear Information System (INIS)

    Marinkov, V.; Bikit, I.; Martinc, R.; Veskovic, M.; Slivka, J.; Vaderna, S.

    1987-01-01

    The distribution of the epithermal neutron flux in the reactor RA - Vinca has been measured by means of Zr-activation detectors. In channel VK-8 a non-homogeneous flux distribution was observed. (author)

  3. Modeling of Drift Effects on Solar Tower Concentrated Flux Distributions

    Directory of Open Access Journals (Sweden)

    Luis O. Lara-Cerecedo

    2016-01-01

    A novel modeling tool for calculation of central receiver concentrated flux distributions is presented, which takes into account drift effects. This tool is based on a drift model that includes different geometrical error sources in a rigorous manner and on a simple analytic approximation for the individual flux distribution of a heliostat. The model is applied to a group of heliostats of a real field to obtain the resulting flux distribution and its variation along the day. The distributions differ strongly from those obtained assuming the ideal case without drift or a case with a Gaussian tracking error function. The time evolution of peak flux is also calculated to demonstrate the capabilities of the model. The evolution of this parameter also shows strong differences in comparison to the case without drift.

  4. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  5. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting-position formula for modeling the concentrations of Total Suspended Particles (TSP) and of Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is also estimated. The EAQLV standard limits for TSP and PM10 concentrations are 24-h averages of 230 μg/m3 and 70 μg/m3, respectively. Five frequency distribution functions combined with seven plotting-position formulas (empirical cumulative distribution functions) are compared in fitting the daily average TSP and PM10 concentrations for Ain Sokhna city in 2014. The Quantile-Quantile (Q-Q) plot is used to assess how closely a data set fits a particular distribution. A probability distribution representing TSP and PM10 is chosen on the basis of statistical performance indicators. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations, followed by the Burr distribution with the same plotting position. The exceedance probability and days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and number of exceedance days for TSP concentrations are 0.052 and 19 days, respectively; the PM10 concentration exceeds the threshold limit on 174 days.
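For illustration, once Frechet parameters are in hand, the exceedance probability and expected number of exceedance days follow directly from the Frechet CDF. The parameter values below are invented placeholders, not the study's fitted values.

```python
import math

def frechet_cdf(x, shape, loc=0.0, scale=1.0):
    """CDF of the Frechet distribution: exp(-((x - loc)/scale)^(-shape)) for x > loc."""
    if x <= loc:
        return 0.0
    return math.exp(-((x - loc) / scale) ** (-shape))

# Hypothetical fitted parameters for daily TSP (ug/m3).
shape, loc, scale = 2.0, 0.0, 90.0
threshold = 230.0          # EAQLV 24-h limit for TSP
p_exceed = 1.0 - frechet_cdf(threshold, shape, loc, scale)
days = round(365 * p_exceed)   # expected exceedance days in a year
```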

  6. Device for measuring neutron-flux distribution density

    International Nuclear Information System (INIS)

    Rozenbljum, N.D.; Mitelman, M.G.; Kononovich, A.A.; Kirsanov, V.S.; Zagadkin, V.A.

    1977-01-01

    An arrangement is described for measuring the distribution of neutron flux density over the height of a nuclear reactor core and which may be used for monitoring energy release or for detecting deviations of neutron flux from an optimal level so that subsequent balance can be achieved. It avoids mutual interference of detectors. Full constructional details are given. (UK)

  7. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations-management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research focuses on the stochastic nature of successive link travel-speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link conditional probability to estimate the expected distributions of route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel-time reliability measures. The study also suggests an adaptive method for calculating and updating the route travel-time distribution as new data or information are added. This methodology can be useful for estimating performance measures required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
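The convolution step can be sketched for two discretized link travel-time distributions. Note that this toy version assumes independent links, whereas the study's contribution is precisely to replace independence with link-to-link conditional probabilities; the pmfs below are hypothetical.

```python
def convolve(p, q):
    """Discrete convolution of two independent link travel-time pmfs
    (index = travel time in minutes)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

# Hypothetical per-link pmfs over travel times 0..3 minutes.
link1 = [0.0, 0.5, 0.3, 0.2]
link2 = [0.0, 0.6, 0.3, 0.1]
route = convolve(link1, link2)           # route travel-time pmf over 0..6 minutes
route_mean = sum(t * p for t, p in enumerate(route))
```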

  8. Calculations of Neutron Flux Distributions by Means of Integral Transport Methods

    Energy Technology Data Exchange (ETDEWEB)

    Carlvik, I

    1967-05-15

    Flux distributions have been calculated, mainly in one energy group, for a number of systems representing geometries of interest for reactor calculations. Integral transport methods of two kinds were utilised: collision probabilities (CP) and the discrete method (DIT). The geometries considered comprise the three one-dimensional geometries (planar, spherical, and annular), and further a square cell with a circular fuel rod and a rod-cluster cell with a circular outer boundary. For the annular cells both methods (CP and DIT) were used and the results were compared. The purpose of the work is twofold: firstly, to demonstrate the versatility and efficacy of integral transport methods, and secondly, to serve as a guide for anybody who wants to use the methods.

  9. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
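The underlying computation (sketched here in Python, not the CUMBIN C source itself) is a straightforward upper-tail binomial sum, which also gives k-out-of-n system reliability or availability:

```python
from math import comb

def cumbin(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): probability that at least k of n
    independent components, each succeeding with probability p, succeed."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Availability of a 2-out-of-3 system with component availability 0.95:
# 3 * 0.95^2 * 0.05 + 0.95^3 = 0.99275.
a = cumbin(3, 0.95, 2)
```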

  10. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    Science.gov (United States)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism of runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC in increasing the maximum discharge diminishes. Only for low probabilities is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, two applications were carried out: one to a small watershed, to test whether rational runoff-coefficient tables for the rational method can be prepared in advance, and one comparing peak discharges obtained with the GABS model against those measured in an experimental flume for a loamy-sand soil.

  11. A statistical model for horizontal mass flux of erodible soil

    International Nuclear Information System (INIS)

    Babiker, A.G.A.G.; Eltayeb, I.A.; Hassan, M.H.A.

    1986-11-01

    It is shown that the mass flux of erodible soil transported horizontally by a statistically distributed wind flow has a statistical distribution. Explicit expression for the probability density function, p.d.f., of the flux is derived for the case in which the wind speed has a Weibull distribution. The statistical distribution for a mass flux characterized by a generalized Bagnold formula is found to be Weibull for the case of zero threshold speed. Analytic and numerical values for the average horizontal mass flux of soil are obtained for various values of wind parameters, by evaluating the first moment of the flux density function. (author)
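The first moment can be checked numerically: average a generalized Bagnold-type flux (with a threshold speed) over a Weibull wind-speed density. The flux constant, threshold speed, and Weibull parameters below are placeholders, not values from the report.

```python
import math

def weibull_pdf(u, k, c):
    """Weibull wind-speed density with shape k and scale c."""
    return (k / c) * (u / c) ** (k - 1) * math.exp(-((u / c) ** k))

def bagnold_flux(u, K=1.0, ut=5.0):
    """Generalized Bagnold-type horizontal mass flux (arbitrary units),
    zero below the threshold speed ut."""
    return K * u * u * (u - ut) if u > ut else 0.0

def mean_flux(k, c, umax=40.0, n=4000):
    """First moment of the flux under the Weibull wind distribution,
    by midpoint-rule numerical integration."""
    h = umax / n
    return sum(bagnold_flux((i + 0.5) * h) * weibull_pdf((i + 0.5) * h, k, c) * h
               for i in range(n))

q = mean_flux(k=2.0, c=8.0)
```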

  12. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the expo- .... tition function Z of the network as the sum over all degree distributions, with given energy.

  13. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

    The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying the power law rather than standard Gibbs-type distributions. The Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution, or the q-exponential family, by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give a new mathematical structure to the q-exponential family, different from those previously given. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (Maximum A Posteriori Probability) estimator.

  14. Comparative analysis through probability distributions of a data set

    Science.gov (United States)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main used idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to order the fitted distributions accordingly to how good they fit to data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of most commonly used goodness of fit tests as: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.

  15. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here, gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random-variable distributions on the probability of slip is also studied. It is shown that the required and available friction distributions cannot automatically be assumed to be normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
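The single-integral form, P(slip) = integral of f_avail(x) * [1 - F_req(x)] dx, evaluated with the trapezoidal rule, can be sketched as follows. Normal inputs are used here only so the result can be checked against a closed form; as the study stresses, the method itself accepts any pair of distributions. All friction parameters are hypothetical.

```python
import math

def npdf(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def ncdf(x, m, s):
    return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2))))

def p_slip(mu_req, s_req, mu_av, s_av, lo=0.0, hi=2.0, n=2000):
    """Probability that required friction exceeds available friction,
    P = integral f_avail(x) * (1 - F_req(x)) dx, by the trapezoidal rule."""
    h = (hi - lo) / n
    def g(x):
        return npdf(x, mu_av, s_av) * (1.0 - ncdf(x, mu_req, s_req))
    return h * (0.5 * g(lo) + sum(g(lo + i * h) for i in range(1, n)) + 0.5 * g(hi))

# Hypothetical friction coefficients: required ~ N(0.25, 0.05), available ~ N(0.45, 0.08).
p = p_slip(mu_req=0.25, s_req=0.05, mu_av=0.45, s_av=0.08)
```

For independent normals the exact answer is P(Z > 0.20 / sqrt(0.05^2 + 0.08^2)), which the trapezoidal estimate should match closely.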

  16. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
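
    Two of the fitting techniques listed (maximum likelihood and method of moments) can be sketched with generic statistical tools. The lognormal sample below is synthetic, a stand-in for a performance-assessment input parameter, not Yucca Mountain data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for an input parameter (illustrative values only).
data = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# Maximum likelihood fit of a two-parameter lognormal (floc=0 pins location).
shape, loc, scale = stats.lognorm.fit(data, floc=0)
mle_mu, mle_sigma = np.log(scale), shape

# Method of moments applied to the log-transformed data, for comparison.
mom_mu, mom_sigma = np.log(data).mean(), np.log(data).std()

# A goodness-of-fit check (Kolmogorov-Smirnov) on the fitted model.
ks = stats.kstest(data, 'lognorm', args=(shape, loc, scale))
print(mle_mu, mle_sigma, mom_mu, mom_sigma, ks.pvalue)
```

    For the lognormal with fixed location, the two estimators nearly coincide; for other families they generally differ, which is part of what the report compares.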

  17. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  18. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term ''flowing interval spacing'' as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled ''Probability Distribution for Flowing Interval Spacing'' (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) ''Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses'' (CRWMS M and O 1999a) and (2) ''Incorporation of Heterogeneity in SZ Flow and Transport Analyses'' (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to a flowing interval, the true flowing interval spacing could be

  19. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in many fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  20. A Probability Distribution over Latent Causes, in the Orbitofrontal Cortex.

    Science.gov (United States)

    Chan, Stephanie C Y; Niv, Yael; Norman, Kenneth A

    2016-07-27

    The orbitofrontal cortex (OFC) has been implicated in both the representation of "state," in studies of reinforcement learning and decision making, and also in the representation of "schemas," in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or "latent cause" that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes' rule to compute a posterior probability distribution over latent causes. To test whether such a posterior probability distribution is represented in the OFC, we tasked human participants with inferring a probability distribution over four possible latent causes, based on their observations. Using fMRI pattern similarity analyses, we found that BOLD activity in the OFC is best explained as representing the (log-transformed) posterior distribution over latent causes. Furthermore, this pattern explained OFC activity better than other task-relevant alternatives, such as the most probable latent cause, the most recent observation, or the uncertainty over latent causes. Our world is governed by hidden (latent) causes that we cannot observe, but which generate the observations we see. A range of high-level cognitive processes require inference of a probability distribution (or "belief distribution") over the possible latent causes that might be generating our current observations. This is true for reinforcement learning and decision making (where the latent cause comprises the true "state" of the task), and for episodic memory (where memories are believed to be organized by the inferred situation or "schema"). Using fMRI, we show that this belief distribution over latent causes is encoded in patterns of brain activity in the orbitofrontal cortex, an area that has been separately implicated in the representations of both states and schemas. Copyright © 2016 the authors 0270-6474/16/367817-12$15.00/0.
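
    The inference the task requires can be sketched in a few lines. The prior and likelihoods below are illustrative assumptions, not the study's actual task parameters: four latent causes, each generating observations with a different likelihood, updated by Bayes' rule after each observation.

```python
import numpy as np

prior = np.array([0.25, 0.25, 0.25, 0.25])
# likelihood[c, o] = P(observation o | latent cause c); illustrative values.
likelihood = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
])

def posterior(prior, likelihood, observations):
    """Bayes' rule applied sequentially over a list of observation indices."""
    p = prior.copy()
    for o in observations:
        p = p * likelihood[:, o]   # multiply in the evidence
        p /= p.sum()               # renormalize to a probability distribution
    return p

p = posterior(prior, likelihood, [0, 0, 1])
print(p)  # the belief distribution over the four latent causes
```

    The study's claim is that OFC activity patterns track (a log transform of) exactly this kind of posterior vector, rather than just its argmax.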

  1. Investigating The Neutron Flux Distribution Of The Miniature Neutron Source Reactor MNSR Type

    International Nuclear Information System (INIS)

    Nguyen Hoang Hai; Do Quang Binh

    2011-01-01

    The neutron flux distribution is an important characteristic of a nuclear reactor. In this article, four-energy-group neutron flux distributions of the miniature neutron source reactor MNSR type along the radial and axial directions are investigated for the case where the control rod is fully withdrawn. In addition, the effect of control rod position on the thermal neutron flux distribution is also studied. The group constants for all reactor components are generated by the WIMSD code, and the neutron flux distributions are calculated by the CITATION code. The results show that the control rod position affects the flux distribution only in the region around the control rod. (author)

  2. Measurement of the thermal flux distribution in the IEA-R1 reactor

    International Nuclear Information System (INIS)

    Tangari, C.M.; Moreira, J.M.L.; Jerez, R.

    1986-01-01

    Knowledge of the neutron flux distribution in research reactors is important because it gives the power distribution over the core and provides better conditions for performing experiments and sample irradiations. The measured neutron flux distribution can also serve as a means of comparison for the calculational methods of reactor analysis currently in use at this institute. The thermal neutron flux distribution of the IEA-R1 reactor has been measured with the miniature chamber WL-23292. To carry out the measurements, a guide system was built that permits the insertion of the mini-chamber between the fuel of the fuel elements. It can be introduced in two different positions in a fuel element, and in each it spans 26 axial positions. With this guide system the thermal neutron flux distribution of the IEA-R1 nuclear reactor can be obtained in a fast and efficient manner. The measured flux distribution clearly shows the effects of control rods and reflectors in the IEA-R1 reactor. The difficulties encountered during the measurements are discussed in detail, as well as the procedures adopted to overcome them. (Author) [pt

  3. Probability distributions for Markov chain based quantum walks

    Science.gov (United States)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.

  4. Measurement and simulation of thermal neutron flux distribution in the RTP core

    Science.gov (United States)

    Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.

    2018-01-01

    The in-core thermal neutron flux distribution was determined using measurement and simulation methods for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self Powered Neutron Detector (SPND) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate the calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on the measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards the outer side of the core. The results show relatively good agreement between calculation and measurement, with both giving the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% compared to the SPND measurement. As the model also predicts the neutron flux distribution in the core well, it can be used for characterization of the full core, that is, neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.

  5. Tropical Gravity Wave Momentum Fluxes and Latent Heating Distributions

    Science.gov (United States)

    Geller, Marvin A.; Zhou, Tiehan; Love, Peter T.

    2015-01-01

    Recent satellite determinations of global distributions of absolute gravity wave (GW) momentum fluxes in the lower stratosphere show maxima over the summer subtropical continents and little evidence of GW momentum fluxes associated with the intertropical convergence zone (ITCZ). This seems to be at odds with parameterizations for GW momentum fluxes, where the source is a function of latent heating rates, which are largest in the region of the ITCZ in terms of monthly averages. The authors have examined global distributions of atmospheric latent heating, cloud-top-pressure altitudes, and lower-stratosphere absolute GW momentum fluxes and have found that monthly averages of the lower-stratosphere GW momentum fluxes more closely resemble the monthly mean cloud-top altitudes rather than the monthly mean rates of latent heating. These regions of highest cloud-top altitudes occur when rates of latent heating are largest on the time scale of cloud growth. This, plus previously published studies, suggests that convective sources for stratospheric GW momentum fluxes, being a function of the rate of latent heating, will require either a climate model to correctly model this rate of latent heating or some ad hoc adjustments to account for shortcomings in a climate model's land-sea differences in convective latent heating.

  6. Evaluation of burst probability for tubes by Weibull distributions

    International Nuclear Information System (INIS)

    Kao, S.

    1975-10-01

    The investigation of candidate distributions that best describe the burst pressure failure probability characteristics of nuclear power steam generator tubes has been continued. To date it has been found that the Weibull distribution provides an acceptable fit for the available data from both the statistical and physical viewpoints. The reasons for the acceptability of the Weibull distribution are stated, together with the results of tests for the suitability of fit. In exploring the acceptability of the Weibull distribution for the fitting, a graphical method called the ''density-gram'' is employed instead of the usual histogram. With this method a more sensible graphical observation of the empirical density may be made for cases where the available data are very limited. Based on these methods, estimates of failure pressure are made for the left-tail probabilities.
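
    The Weibull fit and the left-tail failure probability it yields can be sketched with standard tools. The burst-pressure sample below is synthetic with illustrative parameters, not steam generator tube data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical burst-pressure data (MPa); illustrative values only.
burst = rng.weibull(8.0, size=200) * 40.0   # true shape k=8, scale 40

# Fit a two-parameter Weibull by maximum likelihood (floc=0 pins location).
k, loc, lam = stats.weibull_min.fit(burst, floc=0)

# Left-tail failure probability: P(burst pressure < p) = 1 - exp(-(p/lam)^k)
p_fail_at_20 = stats.weibull_min.cdf(20.0, k, scale=lam)
print(k, lam, p_fail_at_20)
```

    The left tail is exactly where the choice of distribution matters most, which is why the abstract stresses both statistical and physical grounds for accepting the Weibull form.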

  7. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a contrived numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
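
    The anomaly described above is easy to exhibit numerically. In this sketch (sigma = 2 is an illustrative choice for "large uncertainty"), the mean minus one standard deviation of a lognormal variable is negative, which is impossible for the variable itself, while an equal-tails 90% confidence interval remains strictly positive.

```python
import numpy as np
from scipy import stats

mu, sigma = 0.0, 2.0   # large lognormal uncertainty (illustrative values)

median = np.exp(mu)
mean   = np.exp(mu + sigma**2 / 2)
mode   = np.exp(mu - sigma**2)

# Symmetric (equal 5% tails) 90% confidence interval:
z = stats.norm.ppf(0.95)
lo, hi = np.exp(mu - z * sigma), np.exp(mu + z * sigma)

# Standard deviation of the lognormal variable itself:
sd = np.sqrt((np.exp(sigma**2) - 1.0) * np.exp(2 * mu + sigma**2))

print(mode, median, mean)      # mode < median < mean for any sigma > 0
print((lo, hi), mean - sd)     # mean - sd is negative here
```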

  8. Simulation of Daily Weather Data Using Theoretical Probability Distributions.

    Science.gov (United States)

    Bruhn, J. A.; Fry, W. E.; Fick, G. W.

    1980-09-01

    A computer simulation model was constructed to supply daily weather data to a plant disease management model for potato late blight. In the weather model Monte Carlo techniques were employed to generate daily values of precipitation, maximum temperature, minimum temperature, minimum relative humidity and total solar radiation. Each weather variable is described by a known theoretical probability distribution but the values of the parameters describing each distribution are dependent on the occurrence of rainfall. Precipitation occurrence is described by a first-order Markov chain. The amount of rain, given that rain has occurred, is described by a gamma probability distribution. Maximum and minimum temperature are simulated with a trivariate normal probability distribution involving maximum temperature on the previous day, maximum temperature on the current day and minimum temperature on the current day. Parameter values for this distribution are dependent on the occurrence of rain on the previous day. Both minimum relative humidity and total solar radiation are assumed to be normally distributed. The values of the parameters describing the distribution of minimum relative humidity is dependent on rainfall occurrence on the previous day and current day. Parameter values for total solar radiation are dependent on the occurrence of rain on the current day. The assumptions made during model construction were found to be appropriate for actual weather data from Geneva, New York. The performance of the weather model was evaluated by comparing the cumulative frequency distributions of simulated weather data with the distributions of actual weather data from Geneva, New York and Fort Collins, Colorado. For each location, simulated weather data were similar to actual weather data in terms of mean response, variability and autocorrelation. The possible applications of this model when used with models of other components of the agro-ecosystem are discussed.
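
    The precipitation part of such a generator (first-order Markov chain for occurrence, gamma distribution for amounts) can be sketched directly. The transition probabilities and gamma parameters below are illustrative assumptions, not the Geneva or Fort Collins estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters: P(rain today | rain yesterday?) and
# gamma-distributed rain amount (mm), given that rain occurred.
p_rain = {True: 0.60, False: 0.25}
gamma_shape, gamma_scale = 0.8, 8.0

def simulate(days):
    wet, amounts = False, []
    for _ in range(days):
        wet = rng.random() < p_rain[wet]   # first-order Markov step
        amounts.append(rng.gamma(gamma_shape, gamma_scale) if wet else 0.0)
    return np.array(amounts)

rain = simulate(100_000)
wet_frac = (rain > 0).mean()

# Stationary wet-day frequency of the two-state chain: p01 / (p01 + p10)
pi_wet = 0.25 / (0.25 + (1.0 - 0.60))
print(wet_frac, pi_wet, rain[rain > 0].mean())
```

    Temperature, humidity and radiation would then be drawn from (conditional) normal distributions whose parameters are switched on the simulated rain occurrence, as the abstract describes.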

  9. Investigating the impact of uneven magnetic flux density distribution on core loss estimation

    DEFF Research Database (Denmark)

    Niroumand, Farideh Javidi; Nymand, Morten; Wang, Yiren

    2017-01-01

    There are several approaches for loss estimation in magnetic cores, and all of these approaches rely highly on accurate information about the flux density distribution in the cores. It is often assumed that the magnetic flux density distributes evenly throughout the core, and the overall core loss is calculated according to an effective flux density value and the macroscopic dimensions of the cores. However, the flux distribution in the core can be altered by core shapes and/or operating conditions due to nonlinear material properties. This paper studies the element-wise estimation of the loss in magnetic …

  10. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are: the fuzzy transformation via a ranking function, and the stochastic transformation when the α-cut technique and linguistic hedges are used in the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  11. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

    In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space-independent, one-group neutron point reactor model without delayed neutrons. We recall the generating function methodology and the analytical results obtained by G.I. Bell when the c2 approximation is used, and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution, which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology, where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time dependent distributions, and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
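
    The inversion step (recovering a discrete probability distribution from its generating function via a discrete Fourier transform) can be sketched on a known case. A Poisson distribution stands in here for the neutron number distribution; this is an illustrative assumption, not the paper's reactor model.

```python
import numpy as np
from math import exp, factorial

lam, M = 3.0, 64                     # M must comfortably exceed the support
z = np.exp(2j * np.pi * np.arange(M) / M)   # roots of unity
G = np.exp(lam * (z - 1.0))          # pgf of Poisson(lam): G(z) = exp(lam(z-1))

# G(z_k) = sum_n p_n z_k^n, so the forward DFT of the samples gives M * p_n.
p = np.fft.fft(G).real / M

p_exact = [exp(-lam) * lam ** n / factorial(n) for n in range(10)]
print(p[:10])
print(p_exact)
```

    In the paper's setting, G is obtained by solving its differential equation numerically rather than in closed form, but the DFT inversion step is the same.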

  12. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
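
    For small problems the exact null distribution of the rank product can be obtained by brute-force enumeration, which makes the idea of "true tail probabilities" concrete. This sketch assumes k independent uniform ranks on 1..n (a simplification of the replicated-experiment setting); the paper's contribution is a derivation that avoids such enumeration.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def rank_product_pmf(n, k):
    """Exact distribution of the product of k independent uniform
    ranks on 1..n (brute force; feasible only for small n, k)."""
    counts = Counter()
    for ranks in product(range(1, n + 1), repeat=k):
        rp = 1
        for r in ranks:
            rp *= r
        counts[rp] += 1
    total = n ** k
    return {rp: Fraction(c, total) for rp, c in sorted(counts.items())}

pmf = rank_product_pmf(n=5, k=3)
# Tail probability P(RP <= 2): only (1,1,1) and the three orderings of (1,1,2).
tail = sum(pr for rp, pr in pmf.items() if rp <= 2)
print(pmf[1], tail)
```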

  13. Feynman quasi probability distribution for spin-(1/2), and its generalizations

    International Nuclear Information System (INIS)

    Colucci, M.

    1999-01-01

    Feynman's paper Negative probability is examined, in which, after a discussion of the possibility of attributing a real physical meaning to quasi probability distributions, he introduces a new kind of distribution for spin-(1/2), with a possible method of generalization to systems with an arbitrary number of states. The principal aim of this article is to shed light upon the method of construction of these distributions, taking into consideration their application to some experiments and discussing their positive and negative aspects.

  14. Influence of dose distribution homogeneity on the tumor control probability in heavy-ion radiotherapy

    International Nuclear Information System (INIS)

    Wen Xiaoqiong; Li Qiang; Zhou Guangming; Li Wenjian; Wei Zengquan

    2001-01-01

    In order to estimate the influence of a non-uniform dose distribution on the clinical treatment result, the influence of dose distribution homogeneity on the tumor control probability was investigated. Based on the formula deduced previously for the survival fraction of cells irradiated by a non-uniform heavy-ion irradiation field and the theory of tumor control probability, the tumor control probability was calculated for a tumor model exposed to different degrees of dose distribution homogeneity. The results show that, for the same total dose, the tumor control probability decreases as the dose distribution homogeneity worsens. In clinical treatment, the dose distribution homogeneity should be better than 95%.
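
    The qualitative effect (same total dose, lower TCP under a less homogeneous distribution) can be reproduced with a minimal Poisson TCP sketch. The exponential survival curve and all parameter values below are illustrative assumptions, not the paper's heavy-ion survival model.

```python
import numpy as np

alpha = 0.35          # radiosensitivity, 1/Gy (illustrative)
n_per_voxel = 1e5     # clonogenic cells per voxel (illustrative)

def tcp(doses):
    """Poisson TCP: product over voxels of exp(-n_i * SF(D_i)),
    with a simple exponential survival curve SF(D) = exp(-alpha * D)."""
    sf = np.exp(-alpha * np.asarray(doses))
    return float(np.exp(-(n_per_voxel * sf).sum()))

uniform = tcp([60.0] * 10)            # homogeneous 60 Gy to 10 voxels
hetero  = tcp([55.0, 65.0] * 5)       # same mean dose, +/- 5 Gy spread

print(uniform, hetero)  # the heterogeneous plan controls the tumor less often
```

    The cold voxels dominate: because survival is exponential in dose, underdosed regions raise the expected number of surviving clonogens far more than overdosed regions lower it.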

  15. Separating the contributions of variability and parameter uncertainty in probability distributions

    International Nuclear Information System (INIS)

    Sankararaman, S.; Mahadevan, S.

    2013-01-01

    This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data leads to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family of distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels; first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters
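
    The decomposition can be illustrated with a double-loop sample and the law of total variance. The model below (a normal variable with an uncertain mean) is a toy assumption, not the paper's method in full; the paper's contribution is doing this attribution with variance-based global sensitivity analysis rather than brute-force nesting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Natural variability: X | mu ~ Normal(mu, sigma)
# Distribution parameter uncertainty: mu ~ Normal(0, tau)
sigma, tau = 2.0, 1.0
n_outer, n_inner = 2000, 2000

mus = rng.normal(0.0, tau, n_outer)                       # outer: parameters
x = rng.normal(mus[:, None], sigma, (n_outer, n_inner))   # inner: variability

total_var = x.var()
var_from_parameter = x.mean(axis=1).var()     # Var( E[X | mu] )  -> tau^2
var_from_variability = x.var(axis=1).mean()   # E( Var[X | mu] )  -> sigma^2

# Law of total variance: total ~= sigma^2 + tau^2 = 5
print(total_var, var_from_variability, var_from_parameter)
```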

  16. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)
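
    The core of the DFP idea can be shown in a few lines: when the per-demand failure probability is distributed over environments, two nominally independent components fail together more often than the product of their marginal failure probabilities would suggest. The environment weights and failure probabilities below are illustrative assumptions.

```python
import numpy as np

# 'Environment' -> (probability of being in it, failure probability in it)
envs = {"good maintenance": (0.7, 1e-4),
        "poor maintenance": (0.3, 1e-2)}
weights = np.array([w for w, _ in envs.values()])
p_fail  = np.array([p for _, p in envs.values()])

p_single = float((weights * p_fail).sum())      # marginal failure probability
p_both   = float((weights * p_fail**2).sum())   # both components, shared env
p_indep  = p_single**2                          # naive independence assumption

print(p_single, p_both, p_indep)  # p_both exceeds p_indep: dependence emerges
```

    The dependence arises purely from the shared environment, without any explicit common-cause event, which is the point of the approach.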

  17. Enhancement of magnetic flux distribution in a DC superconducting electric motor

    International Nuclear Information System (INIS)

    Hamid, N A; Ewe, L S; Chin, K M

    2013-01-01

    Most motor designs require an air gap between the rotor and stator to enable the armature to rotate freely. The interaction of the magnetic flux from the rotor and stator within the air gap provides the thrust for rotational motion. Thus, understanding the magnetic flux in the vicinity of the air gap is very important for mathematically calculating the magnetic flux generated in the area. In this work, a finite element analysis was employed to study the behavior of the magnetic flux in view of designing a synchronous DC superconducting electric motor. The analysis provides an ideal magnetic flux distribution within the components of the motor. The flux plot analysis indicates that flux losses are mainly in the form of leakage and fringe effects. The analysis also shows that the flux density is high in the area around the air gap and the rotor. The high flux density provides a high-force area that enables the rotor to rotate. In contrast, the other parts of the motor body do not show high flux density, indicating a low distribution of flux. Consequently, a bench-top model of a DC superconducting motor was developed, for which a 2-pole type winding was chosen. Each field coil was designed with a racetrack-shaped double pancake wound using DI-BSCCO Bi-2223 superconducting tapes. The performance and energy efficiency of the superconducting motor were superior when compared to a conventional motor of similar capacity.

  18. Prediction of metabolic flux distribution from gene expression data based on the flux minimization principle.

    Directory of Open Access Journals (Sweden)

    Hyun-Seob Song

    Prediction of possible flux distributions in a metabolic network provides detailed phenotypic information that links metabolism to cellular physiology. To estimate metabolic steady-state fluxes, the most common approach is to solve a set of macroscopic mass balance equations subjected to stoichiometric constraints while attempting to optimize an assumed optimal objective function. This assumption is justifiable in specific cases but may be invalid when tested across different conditions, cell populations, or other organisms. With an aim to providing a more consistent and reliable prediction of flux distributions over a wide range of conditions, in this article we propose a framework that uses the flux minimization principle to predict active metabolic pathways from mRNA expression data. The proposed algorithm minimizes a weighted sum of flux magnitudes, while biomass production can be bounded to fit an ample range from very low to very high values according to the analyzed context. We have formulated the flux weights as a function of the corresponding enzyme reaction's gene expression value, enabling the creation of context-specific fluxes based on a generic metabolic network. In case studies of wild-type Saccharomyces cerevisiae, and wild-type and mutant Escherichia coli strains, our method achieved high prediction accuracy, as gauged by correlation coefficients and sums of squared error, with respect to the experimentally measured values. In contrast to other approaches, our method was able to provide quantitative predictions for both model organisms under a variety of conditions. Our approach requires no prior knowledge or assumption of a context-specific metabolic functionality and does not require trial-and-error parameter adjustments. Thus, our framework is of general applicability for modeling the transcription-dependent metabolism of bacteria and yeasts.
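
    The optimization at the heart of this approach (minimize an expression-weighted sum of flux magnitudes subject to steady-state stoichiometry) can be sketched as a linear program on a toy network. The network, weights and uptake value below are hypothetical, for illustration only; with irreversible reactions, |v| reduces to v and the problem is a plain LP.

```python
import numpy as np
from scipy.optimize import linprog

# Toy irreversible network (hypothetical):
# v1: A_ext -> A,  v2: A -> B,  v3: A -> C,  v4: B -> biomass,  v5: C -> biomass
S = np.array([[1, -1, -1,  0,  0],   # metabolite A balance
              [0,  1,  0, -1,  0],   # metabolite B balance
              [0,  0,  1,  0, -1]])  # metabolite C balance

# Flux weights derived (hypothetically) from expression: the B-branch
# enzyme is highly expressed (cheap), the C-branch is not (expensive).
w = np.array([0.0, 1.0, 5.0, 0.0, 0.0])

uptake = 10.0
res = linprog(c=w,
              A_eq=np.vstack([S, [1, 0, 0, 0, 0]]),   # S v = 0 and v1 = uptake
              b_eq=[0, 0, 0, uptake],
              bounds=[(0, None)] * 5)

v = res.x
print(v)  # flux is routed entirely through the cheap, highly expressed branch
```

    In the article's framework the weights come from measured mRNA levels over a genome-scale network, and biomass is bounded rather than fixed, but the optimization structure is the same.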

  19. Fast neutron fluxes distribution in Egyptian ilmenite concrete

    International Nuclear Information System (INIS)

    Megahed, R.M.; Abou El-Nasr, T.Z.; Bashter, I.I.

    1978-01-01

This work is concerned with the study of the distribution of fast neutron fluxes in a new type of heavy concrete made from Egyptian ilmenite ores. The neutron source used was a collimated beam of reactor neutrons emitted from one of the horizontal channels of the ET-RR-1 reactor. Measurements were carried out using phosphorus activation detectors. Iso-flux curves are presented which give directly the shape and thickness required to attenuate the emitted fast neutron flux to a certain value. The relaxation lengths were also evaluated from the measured data for both a disc monodirectional source and an infinite plane monodirectional source. The obtained values were compared with those calculated using the derived values of relative number densities and microscopic removal cross-sections of the different constituents. The data show that ilmenite concrete attenuates fast neutron flux more strongly than ordinary concrete. A semiempirical formula was derived to calculate the fast neutron flux at different thicknesses along the beam axis. Another semiempirical formula was derived to calculate the fast neutron flux in ordinary concrete along the beam axis from the corresponding value in ilmenite concrete.

  20. Flux distribution in single phase, Si-Fe, wound transformer cores

    International Nuclear Information System (INIS)

    Loizos, George; Kefalas, Themistoklis; Kladas, Antonios; Souflaris, Thanassis; Paparigas, Dimitris

    2008-01-01

This paper presents experimental results for the longitudinal flux density and its harmonics at the limb, the yoke and the corner, as well as the normal flux in the step lap joint, of a single phase, Si-Fe, wound transformer core. Results show that the flux density, as well as the harmonic content, is higher on the inner (window) side of the core and decreases gradually towards the outer side. Variations of the flux density distribution between the limb and the corner or the yoke of the core were observed. A full record of the normal flux around the step lap region of the model core was also obtained. The longitudinal and normal flux findings will enable the development of more accurate numerical models that describe the magnetic behavior of magnetic cores.

  1. Superthermal photon bunching in terms of simple probability distributions

    Science.gov (United States)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

We analyze the second-order photon autocorrelation function g^(2) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g^(2)(0) > 2]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.

  2. The effects of off-center pellets on the temperature distribution and the heat flux distribution of fuel rods in nuclear reactors

    International Nuclear Information System (INIS)

    Peng Muzhang; Xing Jianhua

    1986-01-01

This paper analyzes the effects of off-center pellets on the steady-state temperature distribution and heat flux distribution of fuel rods in nuclear reactors, and derives the dimensionless temperature distribution and heat flux distribution relationships for fuel rods with off-center pellets. The calculated results show that pellet off-centering results not only in a shift of the peak-temperature location within the fuel pellets, but also in circumferentially nonuniform distributions of the temperature and heat flux at the fuel rod surfaces.

  3. A simulation model of distributions of radiational flux at leaf surfaces in crowns of fruit trees

    International Nuclear Information System (INIS)

    Yamamoto, T.

    1988-01-01

A computer model was constructed for estimating the time-dependent distributions of radiation fluxes at leaf surfaces throughout fruit tree canopies in which leaves are not distributed uniformly in three-dimensional space. Several assumptions were made so that the model could be approximated using solid geometry. To represent the irregular distribution of leaf area in three-dimensional space, the data used in the simulation were the numbers of leaves per internal cubic block of a cubic grid (n divisions per side). The main parameters, which are specific to each fruit species, include the parameters (λ, ν) of the Beta function used to calculate the probability density function of leaf area distribution with respect to inclination angle and the leaf extinction coefficient for a parallel beam, the parameters (A, R_i) used to calculate the stem extinction coefficient for a parallel beam, and the parameters (D_i) used to calculate the leaf extinction coefficients for downward transmission and downward reflection. With these data and parameters, solid geometry and the Lambert-Beer law constitute the model.

  4. THE X-RAY FLUX DISTRIBUTION OF SAGITTARIUS A* AS SEEN BY CHANDRA

    International Nuclear Information System (INIS)

Neilsen, J.; Markoff, S.; Nowak, M. A.; Baganoff, F. K.; Dexter, J.; Witzel, G.; Barrière, N.; Li, Y.; Degenaar, N.; Fragile, P. C.; Gammie, C.; Goldwurm, A.; Grosso, N.; Haggard, D.

    2015-01-01

We present a statistical analysis of the X-ray flux distribution of Sgr A* from the Chandra X-Ray Observatory's 3 Ms Sgr A* X-ray Visionary Project in 2012. Our analysis indicates that the observed X-ray flux distribution can be decomposed into a steady quiescent component, represented by a Poisson process with rate Q = (5.24 ± 0.08) × 10^-3 counts s^-1, and a variable component, represented by a power law process (dN/dF ∝ F^-ξ, ξ = 1.92^{+0.03}_{-0.02}). This slope matches our recently reported distribution of flare luminosities. The variability may also be described by a log-normal process with a median unabsorbed 2-8 keV flux of 1.8^{+0.8}_{-0.6} × 10^-14 erg s^-1 cm^-2 and a shape parameter σ = 2.4 ± 0.2, but the power law provides a superior description of the data. In this decomposition of the flux distribution, all of the intrinsic X-ray variability of Sgr A* (spanning at least three orders of magnitude in flux) can be attributed to flaring activity, likely in the inner accretion flow. We confirm that at the faint end, the variable component contributes ∼10% of the apparent quiescent flux, as previously indicated by our statistical analysis of X-ray flares in these Chandra observations. Our flux distribution provides a new and important observational constraint on theoretical models of Sgr A*, and we use simple radiation models to explore the extent to which a statistical comparison of the X-ray and infrared can provide insights into the physics of the X-ray emission mechanism.
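A power-law slope like the ξ ≈ 1.92 quoted above can be recovered from flux samples by maximum likelihood. The following sketch uses synthetic data and the standard Hill-type estimator; nothing here comes from the paper's actual pipeline:

```python
# Maximum-likelihood estimation of a power-law slope from samples:
# for p(F) ~ F^-xi with F >= Fmin, the MLE is xi_hat = 1 + n / sum(ln(F_i / Fmin)).

import math
import random

random.seed(42)
XI_TRUE, FMIN, N = 1.92, 1.0, 50000   # illustrative "true" slope and sample size

# Inverse-CDF sampling for a Pareto-type power law p(F) ∝ F^-xi, F >= Fmin.
samples = [FMIN * (1.0 - random.random()) ** (-1.0 / (XI_TRUE - 1.0))
           for _ in range(N)]

xi_hat = 1.0 + N / sum(math.log(f / FMIN) for f in samples)
print(xi_hat)  # close to the input slope of 1.92
```

The standard error of this estimator is roughly (ξ − 1)/√n, so with 50,000 samples the recovered slope should agree with the input to a few parts in a thousand.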

  5. THE X-RAY FLUX DISTRIBUTION OF SAGITTARIUS A* AS SEEN BY CHANDRA

    Energy Technology Data Exchange (ETDEWEB)

    Neilsen, J. [Department of Astronomy, Boston University, Boston, MA 02215 (United States); Markoff, S. [Astronomical Institute, " Anton Pannekoek," University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands); Nowak, M. A.; Baganoff, F. K. [MIT Kavli Institute for Astrophysics and Space Research, Cambridge, MA 02139 (United States); Dexter, J. [Department of Astronomy, Hearst Field Annex, University of California, Berkeley, CA 94720-3411 (United States); Witzel, G. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Barrière, N. [Space Sciences Laboratory, 7 Gauss Way, University of California, Berkeley, CA 94720-7450 (United States); Li, Y. [Department of Astronomy and Institute of Theoretical Physics and Astrophysics, Xiamen University, Xiamen, Fujian 361005 (China); Degenaar, N. [Institute of Astronomy, University of Cambridge, Cambridge, CB3 OHA (United Kingdom); Fragile, P. C. [Department of Physics and Astronomy, College of Charleston, Charleston, SC 29424 (United States); Gammie, C. [Department of Astronomy, University of Illinois Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Goldwurm, A. [AstroParticule et Cosmologie (APC), Université Paris 7 Denis Diderot, F-75205 Paris cedex 13 (France); Grosso, N. [Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l' Université, F-67000 Strasbourg (France); Haggard, D., E-mail: jneilsen@space.mit.edu [Department of Physics and Astronomy, AC# 2244, Amherst College, Amherst, MA 01002 (United States)

    2015-02-01

We present a statistical analysis of the X-ray flux distribution of Sgr A* from the Chandra X-Ray Observatory's 3 Ms Sgr A* X-ray Visionary Project in 2012. Our analysis indicates that the observed X-ray flux distribution can be decomposed into a steady quiescent component, represented by a Poisson process with rate Q = (5.24 ± 0.08) × 10^-3 counts s^-1, and a variable component, represented by a power law process (dN/dF ∝ F^-ξ, ξ = 1.92^{+0.03}_{-0.02}). This slope matches our recently reported distribution of flare luminosities. The variability may also be described by a log-normal process with a median unabsorbed 2-8 keV flux of 1.8^{+0.8}_{-0.6} × 10^-14 erg s^-1 cm^-2 and a shape parameter σ = 2.4 ± 0.2, but the power law provides a superior description of the data. In this decomposition of the flux distribution, all of the intrinsic X-ray variability of Sgr A* (spanning at least three orders of magnitude in flux) can be attributed to flaring activity, likely in the inner accretion flow. We confirm that at the faint end, the variable component contributes ∼10% of the apparent quiescent flux, as previously indicated by our statistical analysis of X-ray flares in these Chandra observations. Our flux distribution provides a new and important observational constraint on theoretical models of Sgr A*, and we use simple radiation models to explore the extent to which a statistical comparison of the X-ray and infrared can provide insights into the physics of the X-ray emission mechanism.

  6. On the properties of collision probability integrals in annular geometry-II evaluation

    International Nuclear Information System (INIS)

    Milgram, M.S.; Sly, K.N.

    1979-02-01

To calculate neutron flux distributions in infinitely long annular regions, the inner-outer and outer-outer transmission probabilities p^(io) and p^(oo) are required. Efficient algorithms for the computation of these probabilities as functions of two variables (the ratio of inner to outer radii κ, and the cross-section Σ) are given for 0 -5 . (author)

  7. Generalization of Poisson distribution for the case of changing probability of consequential events

    International Nuclear Information System (INIS)

    Kushnirenko, E.

    1995-01-01

A generalization of the Poisson distribution to the case of changing probabilities of the consequential events is presented. It is shown that the classical Poisson distribution is a special case of this generalized distribution, obtained when the probabilities of the consequential events are constant. Using the generalized Poisson distribution makes it possible in some cases to obtain analytical results instead of resorting to Monte Carlo calculation.
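The abstract does not reproduce the explicit generalized distribution, but one natural reading is a pure-birth counting process whose event rate changes with the number of events already recorded; with constant rates it must reduce to the classical Poisson law. A numerical sketch of that consistency check (all modeling details are assumed, not taken from the paper):

```python
# Pure-birth counting process with state-dependent rates r_n. The forward
# equations are dP_n/dt = r_{n-1} P_{n-1} - r_n P_n. With r_n = lambda for all
# n, the count distribution at time t is Poisson(lambda * t), verified below.

from math import exp, factorial

def birth_process_pmf(rates, t, nmax, steps=20000):
    """Forward-Euler integration of the birth-process forward equations."""
    dt = t / steps
    p = [0.0] * (nmax + 1)
    p[0] = 1.0
    for _ in range(steps):
        q = p[:]
        for n in range(nmax + 1):
            out = rates[n] * q[n]                      # flow out of state n
            inn = rates[n - 1] * q[n - 1] if n > 0 else 0.0  # flow in from n-1
            p[n] = q[n] + dt * (inn - out)
    return p

lam, t = 2.0, 1.0
p = birth_process_pmf([lam] * 40, t, nmax=30)          # constant rates
poisson = [exp(-lam * t) * (lam * t) ** k / factorial(k) for k in range(31)]
print(max(abs(a - b) for a, b in zip(p, poisson)))     # small discretization error
```

Replacing the constant rate list with, say, rates that grow or shrink with n gives a "generalized Poisson" count distribution in the spirit of the abstract, at the cost of losing the closed form.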

  8. Family of probability distributions derived from maximal entropy principle with scale invariant restrictions.

    Science.gov (United States)

    Sonnino, Giorgio; Steinbrecher, György; Cardinali, Alessandro; Sonnino, Alberto; Tlidi, Mustapha

    2013-01-01

Using statistical thermodynamics, we derive a general expression of the stationary probability distribution for thermodynamic systems driven out of equilibrium by several thermodynamic forces. The local equilibrium is defined by imposing the minimum entropy production and the maximum entropy principle under the scale invariance restrictions. The obtained probability distribution presents a singularity that has an immediate physical interpretation in terms of intermittency models. The derived reference probability distribution function is interpreted as a time and ensemble average of the real physical one. A generic family of stochastic processes describing noise-driven intermittency, where the stationary density distribution coincides exactly with the one resulting from entropy maximization, is presented.

  9. Study on probability distribution of fire scenarios in risk assessment to emergency evacuation

    International Nuclear Information System (INIS)

    Chu Guanquan; Wang Jinhui

    2012-01-01

Event tree analysis (ETA) is a frequently used technique for analyzing the probabilities of probable fire scenarios. The event probability is usually characterized by a definite value. This is not appropriate, as such estimates may result from poor-quality statistics and limited knowledge. Without addressing these uncertainties, ETA gives imprecise results and the credibility of the risk assessment is undermined. This paper presents an approach for addressing event probability uncertainties and analyzing the probability distribution of a probable fire scenario. ETA is performed to construct probable fire scenarios. The activation time of every event is treated as a stochastic variable by considering uncertainties in the fire growth rate and other input variables. To obtain the probability distribution of a probable fire scenario, a Markov chain is combined with ETA. A case study is presented to demonstrate the approach.
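The core idea of replacing definite branch probabilities with distributions can be sketched by Monte Carlo over a small event tree. This is a hedged illustration, not the paper's Markov-chain algorithm: the tree structure and the Beta parameters below are invented.

```python
# Propagating event-probability uncertainty through an event tree: each branch
# probability is drawn from a Beta distribution instead of being a fixed value,
# yielding a probability *distribution* for the fire scenario.

import random

random.seed(1)

def scenario_probability():
    # Branch probabilities with uncertainty (Beta parameters are illustrative).
    p_ignition = random.betavariate(2, 8)       # mean 0.2
    p_detect_fail = random.betavariate(1, 9)    # mean 0.1
    p_suppress_fail = random.betavariate(1, 4)  # mean 0.2
    return p_ignition * p_detect_fail * p_suppress_fail

draws = sorted(scenario_probability() for _ in range(20000))
mean = sum(draws) / len(draws)
p95 = draws[int(0.95 * len(draws))]
print(mean, p95)  # mean near 0.2 * 0.1 * 0.2 = 0.004; the 95th percentile is much higher
```

The gap between the mean and the upper percentile is exactly the information a single definite value throws away.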

  10. 3-D flux distribution and criticality calculation of TRIGA Mark-II

    International Nuclear Information System (INIS)

    Can, B.

    1982-01-01

In this work, a static calculation of the flux distribution in the I.T.U. TRIGA Mark-II reactor has been performed. A three-dimensional (r-θ-z) representation of the core has been used. In this representation, the flux distribution has been calculated for different configurations using two-group theory. Thermal-hydraulic and poisoning effects have been ignored. The calculations have been made using the three-dimensional, multigroup code CAN. (author)

  11. Estimating biological elementary flux modes that decompose a flux distribution by the minimal branching property

    DEFF Research Database (Denmark)

    Chan, Siu Hung Joshua; Solem, Christian; Jensen, Peter Ruhdal

    2014-01-01

    biologically feasible EFMs by considering their graphical properties. A previous study on the transcriptional regulation of metabolic genes found that distinct branches at a branch point metabolite usually belong to distinct metabolic pathways. This suggests an intuitive property of biologically feasible EFMs......, i.e. minimal branching. RESULTS: We developed the concept of minimal branching EFM and derived the minimal branching decomposition (MBD) to decompose flux distributions. Testing in the core Escherichia coli metabolic network indicated that MBD can distinguish branches at branch points and greatly...... knowledge, which facilitates interpretation. Comparison of the methods applied to a complex flux distribution in Lactococcus lactis similarly showed the advantages of MBD. The minimal branching EFM concept underlying MBD should be useful in other applications....

  12. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

    Huang Zhifu; Lin Bihong; ChenJincan

    2009-01-01

In order to overcome the limitations of the original expression of the probability distribution appearing in the literature of Incomplete Statistics, a new expression of the probability distribution is derived, in which the Lagrange multiplier β introduced here is proved to be identical to that introduced in the second and third choices for the internal energy constraint in Tsallis' statistics, and to be equal to the physical inverse temperature. It is shown that the probability distribution described by the new expression is invariant under a uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given, and the relationship between the new and the original expressions of the probability distribution is discussed.

  13. Collective motions of globally coupled oscillators and some probability distributions on circle

    Energy Technology Data Exchange (ETDEWEB)

    Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)

    2017-06-28

In 2010, Kato and Jones described a new family of probability distributions on the circle, obtained as Möbius transformations of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between Directional Statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on the cell cycle. • We apply Directional Statistics to study the collective dynamics of coupled oscillators.
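The construction mentioned in the abstract can be sketched directly: sample a von Mises angle and push it through a Möbius transformation of the unit circle. The parameter values below are arbitrary; only the form of the transformation follows Kato and Jones (2010).

```python
# Sampling from a Kato-Jones-type circular distribution: von Mises angles
# mapped by a Möbius transformation that leaves the unit circle invariant.

import cmath
import random

random.seed(7)

def mobius_on_circle(theta, a):
    """Map e^{i theta} by z -> (z + a) / (1 + conj(a) z); |a| < 1 preserves the circle."""
    z = cmath.exp(1j * theta)
    w = (z + a) / (1 + a.conjugate() * z)
    return cmath.phase(w)

a = 0.5 * cmath.exp(0.3j)   # |a| = 0.5 < 1 (illustrative transformation parameter)
angles = [mobius_on_circle(random.vonmisesvariate(mu=0.0, kappa=2.0), a)
          for _ in range(1000)]
# The transformed angles still lie on the circle, now with skewed concentration.
print(min(angles), max(angles))
```

The transformation redistributes the von Mises concentration around the circle, producing the skewness and peakedness that the four-parameter family controls.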

  14. How Can Histograms Be Useful for Introducing Continuous Probability Distributions?

    Science.gov (United States)

    Derouet, Charlotte; Parzysz, Bernard

    2016-01-01

    The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…

  15. The effect of the advanced drift-flux model of ASSERT-PV on critical heat flux, flow and void distributions in CANDU bundle subchannels

    International Nuclear Information System (INIS)

    Hammouda, N.; Rao, Y.F.

    2017-01-01

    Highlights: • Presentation of the “advanced” drift-flux model of the subchannel code ASSERT-PV. • Study the effect of the drift-flux model of ASSERT on CHF and flow distribution. • Quantify model component effects with flow, quality and dryout power measurements. - Abstract: This paper studies the effect of the drift flux model of the subchannel code ASSERT-PV on critical heat flux (CHF), void fraction and flow distribution across fuel bundles. Numerical experiments and comparison against measurements were performed to examine the trends and relative behaviour of the different components of the model under various flow conditions. The drift flux model of ASSERT-PV is composed of three components: (a) the lateral component or diversion cross-flow, caused by pressure difference between connected subchannels, (b) the turbulent diffusion component or the turbulent mixing through gaps of subchannels, caused by instantaneous turbulent fluctuations or flow oscillations, and (c) the void drift component that occurs due to the two-phase tendency toward a preferred distribution. This study shows that the drift flux model has a significant impact on CHF, void fraction and flow distribution predictions. The lateral component of the drift flux model has a stronger effect on CHF predictions than the axial component, especially for horizontal flow. Predictions of CHF, void fraction and flow distributions are most sensitive to the turbulent diffusion component of the model, followed by the void drift component. Buoyancy drift can be significant, but it does not have as much influence on CHF and flow distribution as the turbulent diffusion and void drift.

  16. Multilevel power distribution synthesis for a movable flux mapping system

    International Nuclear Information System (INIS)

    Bollacasa, D.; Terney, W.B.; Vincent, G.F.; Dziadosz, D.; Schleicher, T.

    1992-01-01

A computer software package has been developed to support the synthesis of the three-dimensional power distribution from the detector signals of a movable flux mapping system. The power distribution synthesis is based on methodology developed for fixed in-core detectors. The full-core solution effectively couples all assemblies in the core, whether instrumented or not. The solution is not subject to approximations in the treatment of assemblies where a measurement cannot be made, and it provides an accurate representation of axial variations that may be induced by axial blankets, burnable absorber cutback regions, and axially zoned flux suppression rods.

  17. Calculation of ruin probabilities for a dense class of heavy tailed distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady

    2015-01-01

    In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate...... any such distribution. We prove that formulas from renewal theory, and with a particular attention to ruin probabilities, which are true for common phase-type distributions also hold true for the infinite-dimensional case. We provide algorithms for calculating functionals of interest...... such as the renewal density and the ruin probability. It might be of interest to approximate a given heavy tailed distribution of some other type by a distribution from the class of infinite-dimensional phase-type distributions and to this end we provide a calibration procedure which works for the approximation...

  18. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

The objective of this study is to investigate suitable probability distributions to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions for the weekly and monthly maximum share returns, respectively, in the Malaysian stock market during the studied period.
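The L-moment fitting step for the Generalized Pareto distribution can be sketched with synthetic data. The relations used are the standard ones for the zero-lower-bound GPA (l1 = α/(1+k), l2 = α/((1+k)(2+k)), so k = l1/l2 − 2); the sample size and true parameters are invented, and this is not the paper's code.

```python
# L-moment estimation for the Generalized Pareto (GPA) distribution:
# compute sample probability-weighted moments, convert to L-moments, and
# invert the GPA relations. Synthetic data check the parameter recovery.

import random

random.seed(3)
K_TRUE, ALPHA_TRUE, N = 0.1, 1.0, 20000

# Inverse CDF of the GPA (Hosking parameterization, lower bound 0):
# x = alpha * (1 - (1 - u)^k) / k
xs = sorted(ALPHA_TRUE * (1 - (1 - random.random()) ** K_TRUE) / K_TRUE
            for _ in range(N))

b0 = sum(xs) / N                                           # first PWM
b1 = sum((i / (N - 1)) * x for i, x in enumerate(xs)) / N  # second PWM
l1, l2 = b0, 2 * b1 - b0                                   # first two L-moments

k_hat = l1 / l2 - 2            # invert l1/l2 = 2 + k
alpha_hat = l1 * (1 + k_hat)   # invert l1 = alpha / (1 + k)
print(k_hat, alpha_hat)        # close to the input (0.1, 1.0)
```

L-moment estimators like these are preferred over ordinary moments for heavy-tailed data because they are linear in the order statistics and far less sensitive to extreme observations.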

  19. Calculation of flux density distribution on irradiation field of electron accelerator

    International Nuclear Information System (INIS)

    Tanaka, Ryuichi

    1977-03-01

A simple equation for the flux density distribution in the irradiation field of an ordinary electron accelerator is given as a function of the physical parameters of the irradiation. The calculation is based on the mean square scattering angle derived from a simple multiple scattering theory, with correction factors for air scattering, beam scanning and the number transmission coefficient. The flux density distribution was measured by charge absorption in a graphite target placed in air. For calculated mean square scattering angles of 0.089-0.29, the calculated values agree with experiment to within about 10%, except at large scattering angles. The method is applicable to dose evaluation for ordinary electron accelerators and to the design of various irradiators for radiation chemical reactions. The applicability of the simple multiple scattering theory to calculation of the scattered flux density, and the periodic variation of the flux density of the scanning beam, are also described. (auth.)

  20. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, p_st) for stochastic uncertainty, a probability space (S_su, L_su, p_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, p_st) and (S_su, L_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  1. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, P_st) for stochastic uncertainty, a probability space (S_su, L_su, P_su) for subjective uncertainty, and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, P_st) and (S_su, L_su, P_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  2. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
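The maximum-entropy step described above can be illustrated with a minimal sketch. Assuming only a bounded range [0, 1] and an estimated mean, the entropy-maximizing density is p(x) ∝ exp(λx), with the multiplier λ found numerically; with additional log-moment constraints the same machinery yields the beta distribution mentioned in the abstract. The constraint value below is invented.

```python
# Maximum-entropy density on [0, 1] subject to a mean constraint:
# p(x) ∝ exp(lam * x), with lam chosen by bisection so that E[x] matches.

from math import exp

def maxent_mean(lam):
    """Mean of p(x) ∝ exp(lam * x) on [0, 1] (analytic; lam = 0 gives uniform)."""
    if abs(lam) < 1e-9:
        return 0.5
    return 1.0 / (1.0 - exp(-lam)) - 1.0 / lam

def solve_multiplier(target_mean, lo=-50.0, hi=50.0):
    """Bisection on lam; maxent_mean is monotonically increasing in lam."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if maxent_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = solve_multiplier(0.3)   # hypothetical estimated mean of 0.3
print(lam, maxent_mean(lam))  # lam is negative: density decays toward x = 1
```

A mean below the midpoint of the range forces a decaying (truncated-exponential) density, which is the least-committal distribution consistent with that single constraint.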

  3. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distribution rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)
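The idea of defining a model through conditional probability distributions and simulating it by Monte Carlo can be sketched with a heat-bath (Gibbs) update. The paper's actual models are not specified here, so a 1D Ising chain at high temperature serves as a stand-in: each spin is resampled from its conditional distribution given its neighbors, with no Hamiltonian appearing in the update itself.

```python
# Heat-bath (Gibbs) sampling of a 1D Ising chain defined purely through the
# conditional distribution P(s_i = +1 | neighbors) = 1 / (1 + exp(-2*beta*h)).

import math
import random

random.seed(0)
N, BETA, SWEEPS = 64, 0.2, 4000   # high temperature: magnetization should be ~0

spins = [random.choice((-1, 1)) for _ in range(N)]
mags = []
for sweep in range(SWEEPS):
    for i in range(N):
        h = spins[(i - 1) % N] + spins[(i + 1) % N]    # neighbor field (periodic)
        p_up = 1.0 / (1.0 + math.exp(-2.0 * BETA * h))  # conditional probability
        spins[i] = 1 if random.random() < p_up else -1
    if sweep >= 500:                                    # discard burn-in
        mags.append(sum(spins) / N)

mean_mag = sum(mags) / len(mags)
print(mean_mag)  # fluctuates around zero at this temperature
```

Each freshly generated configuration depends on the previous one only through the conditionals, which is what makes the construction convenient near a critical point, as the abstract emphasizes.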

  4. Probability Distribution and Deviation Information Fusion Driven Support Vector Regression Model and Its Application

    Directory of Open Access Journals (Sweden)

    Changhao Fan

    2017-01-01

Full Text Available In modeling, only information from the deviation between the output of the support vector regression (SVR) model and the training sample is considered, whereas other prior information about the training sample, such as probability distribution information, is ignored. Probability distribution information describes the overall distribution of the sample data in a training sample that contains different degrees of noise and potential outliers, and it can help develop a high-accuracy model. To mine and use the probability distribution information of a training sample, a new probability-distribution-information-weighted SVR (PDISVR) model is proposed. In the PDISVR model, the probability distribution of each sample is taken as its weight and is then introduced into the error coefficient and slack variables of SVR. Thus, both the deviation and the probability distribution information of the training sample are used in the PDISVR model to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with that of three SVR-based methods. The results showed that PDISVR performs better than the three other methods.

  5. Probability of fracture and life extension estimate of the high-flux isotope reactor vessel

    International Nuclear Information System (INIS)

    Chang, S.J.

    1998-01-01

The state of vessel steel embrittlement as a result of neutron irradiation can be measured by the increase in the ductile-brittle transition temperature (DBTT) for fracture, often denoted by RT_NDT for carbon steel. This transition temperature can be calibrated by the drop-weight test and, sometimes, by the Charpy impact test. The life extension for the high-flux isotope reactor (HFIR) vessel is calculated using the method of fracture mechanics, incorporating the effect of the DBTT change. The allowable failure probability of the HFIR vessel over its life is limited by the reactor core melt probability of 10^-4. The operating safety of the reactor is ensured by a periodic hydrostatic pressure test (hydrotest). The hydrotest is performed in order to determine a safe vessel static pressure. The fracture probability resulting from the hydrostatic pressure test is calculated and used to determine the life of the vessel. Failure to perform the hydrotest imposes a limit on the life of the vessel. Conventional methods of fracture probability calculation, such as those used by the NRC-sponsored PRAISE code and the FAVOR code developed in this laboratory, are based on Monte Carlo simulation and require heavy computation. An alternative method of fracture probability calculation, by direct probability integration, is developed in this paper. The present approach offers a simple and expedient way to obtain numerical results without losing generality. Numerical results are obtained for (1) the probability of vessel fracture, (2) the hydrotest time interval, and (3) the hydrotest pressure as a result of the DBTT increase.
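The direct probability integration approach can be sketched in the classic load-resistance form: the fracture probability is the integral of the load density times the probability that the material's resistance falls below that load. All numbers below are illustrative, not HFIR data, and the normal-normal case is chosen only because it has a closed form to check the quadrature against.

```python
# Fracture probability by direct probability integration (no Monte Carlo):
# P_f = ∫ f_load(s) * F_resistance(s) ds. For normal load and resistance the
# exact answer is Phi((mu_L - mu_R) / sqrt(sL^2 + sR^2)).

from math import erf, exp, pi, sqrt

MU_L, S_L = 50.0, 5.0   # applied stress intensity (illustrative units)
MU_R, S_R = 70.0, 8.0   # fracture toughness (illustrative units)

def norm_pdf(x, mu, s):
    return exp(-0.5 * ((x - mu) / s) ** 2) / (s * sqrt(2 * pi))

def norm_cdf(x, mu, s):
    return 0.5 * (1.0 + erf((x - mu) / (s * sqrt(2.0))))

# Numerical integration of f_load(s) * P(resistance < s) over the load range.
a, b, n = MU_L - 10 * S_L, MU_L + 10 * S_L, 20001
hstep = (b - a) / (n - 1)
pf = sum(norm_pdf(a + i * hstep, MU_L, S_L) * norm_cdf(a + i * hstep, MU_R, S_R)
         for i in range(n)) * hstep

pf_exact = norm_cdf(MU_L, MU_R, sqrt(S_L**2 + S_R**2))
print(pf, pf_exact)  # quadrature matches the closed form
```

For the non-normal toughness distributions that arise from DBTT shifts, the same one-dimensional integral applies with the appropriate densities substituted, which is what makes the approach cheap compared with simulation.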

  6. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known the resulting...... are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented...

  7. WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS

    Directory of Open Access Journals (Sweden)

    Mr. Vladimir A. Smagin

    2016-12-01

    Full Text Available The Wiener–Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with the random-variable method concentrated on the abscissa axis.

  8. Probability distribution of long-run indiscriminate felling of trees in ...

    African Journals Online (AJOL)

    The study was undertaken to determine the probability distribution of long-run indiscriminate felling of trees in the northern senatorial district of Adamawa State. Specifically, the study focused on examining the future direction of indiscriminate felling of trees as well as its equilibrium distribution. A multi-stage and simple random ...

  9. Theoretical derivation of wind power probability distribution function and applications

    International Nuclear Information System (INIS)

    Altunkaynak, Abdüsselam; Erdik, Tarkan; Dabanlı, İsmail; Şen, Zekai

    2012-01-01

    Highlights: ► Stochastic characteristics of wind power, the standard deviation and the dimensionless skewness, are derived. ► Perturbation expressions are obtained for the wind power statistics from the Weibull probability distribution function (PDF). ► Comparisons are made with the corresponding characteristics of the wind speed PDF. ► The wind power abides by the Weibull PDF. -- Abstract: The instantaneous wind power contained in the air current is directly proportional to the cube of the wind speed. In practice, there is a record of wind speeds in the form of a time series. It is, therefore, necessary to develop a formulation that takes into consideration the statistical parameters of such a time series. The purpose of this paper is to derive the general wind power formulation in terms of the statistical parameters by using perturbation theory, which leads to a general formulation of the wind power expectation and other statistical parameter expressions such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied to any wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived from wind speed data. It is possible to determine wind power at any desired risk level; however, in practical studies 5% or 10% risk levels are most often preferred, and the necessary simple procedure is presented for this purpose in this paper.
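For the Weibull case treated in the paper, the mean wind power follows directly from the third moment of the Weibull distribution, E[v^3] = c^3 Γ(1 + 3/k). A quick numerical check of that moment, with assumed values for air density, swept area, and the Weibull parameters:

```python
import numpy as np
from scipy.special import gamma

rho, A = 1.225, 1.0   # air density (kg/m^3) and swept area (m^2), assumed
c, k = 8.0, 2.0       # assumed Weibull scale (m/s) and shape parameters

# Analytical mean wind power: E[P] = 0.5*rho*A*E[v^3], E[v^3] = c^3*Gamma(1+3/k)
p_mean = 0.5 * rho * A * c**3 * gamma(1.0 + 3.0 / k)

# Monte Carlo check of the analytical third moment
rng = np.random.default_rng(0)
v = c * rng.weibull(k, 1_000_000)
p_mc = 0.5 * rho * A * float(np.mean(v**3))
```

The agreement of the sampled and analytical values illustrates the abstract's conclusion that the wind power statistics follow from the Weibull parameters alone.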

  10. INDIAN POINT REACTOR REACTIVITY AND FLUX DISTRIBUTION MEASUREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Batch, M. L.; Fischer, F. E.

    1963-11-15

    The reactivity of the Indian Point core was measured near zero reactivity at various shim and control rod patterns. Flux distribution measurements were also made, and the results are expressed in terms of power peaking factors and normalized detector response during rod withdrawal. (D.L.C.)

  11. Poloidal and toroidal heat flux distribution in the CCT tokamak

    International Nuclear Information System (INIS)

    Brown, M.L.; Dhir, V.K.; Taylor, R.J.

    1990-01-01

    Plasma heat flux to the Faraday shield panels of the UCLA Continuous Current Tokamak (CCT) has been measured calorimetrically in order to identify the dominant parameters affecting the spatial distribution of heat deposition. Three heating methods were investigated: audio frequency discharge cleaning, RF heating, and AC ohmic. Significant poloidal asymmetry is present in the heat flux distribution. On the average, the outer panels received 25-30% greater heat flux than the inner ones, with the ratio of maximum to minimum values attaining a difference of more than a factor of 2. As a diagnostic experiment the current to a selected toroidal field coil was reduced in order to locally deflect the toroidal field lines outward in a ripple-like fashion. Greatly enhanced heat deposition (up to a factor of 4) was observed at this location on the outside Faraday panels. The enhancement was greatest for conditions of low toroidal field and low neutral pressure, leading to low plasma densities, for which Coulomb collisions are the smallest. An exponential model based on a heat flux e-folding length describes the experimentally found localization of thermal energy quite adequately. (orig.)

  12. Numerical Loading of a Maxwellian Probability Distribution Function

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    A renormalization procedure for the numerical loading of a Maxwellian probability distribution function (PDF) is formulated. The procedure, which involves the solution of three coupled nonlinear equations, yields a numerically loaded PDF with improved properties for higher velocity moments. This method is particularly useful for low-noise particle-in-cell simulations with electron dynamics
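A minimal sketch of the idea, though not the paper's three-equation renormalization scheme: draw a Gaussian (1D Maxwellian) velocity sample and rescale it so that the loaded particles reproduce the low-order moments exactly, the usual first step toward quiet starts in particle-in-cell codes.

```python
import numpy as np

def load_maxwellian(n, vth=1.0, seed=0):
    """Draw a 1D Maxwellian (Gaussian) velocity sample, then renormalize so
    the loaded particles reproduce the mean and thermal spread exactly.
    (Simple moment matching; the paper solves three coupled nonlinear
    equations to also improve higher velocity moments.)"""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, vth, n)
    v -= v.mean()          # enforce zero drift exactly
    v *= vth / v.std()     # enforce the thermal velocity exactly
    return v

v = load_maxwellian(10_000)
```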

  13. On the probability distribution of the stochastic saturation scale in QCD

    International Nuclear Information System (INIS)

    Marquet, C.; Soyez, G.; Xiao Bowen

    2006-01-01

    It was recently noticed that high-energy scattering processes in QCD have a stochastic nature. An event-by-event scattering amplitude is characterised by a saturation scale which is a random variable. The statistical ensemble of saturation scales formed with all the events is distributed according to a probability law whose cumulants have been recently computed. In this work, we obtain the probability distribution from the cumulants. We prove that it can be considered as Gaussian over a large domain that we specify and our results are confirmed by numerical simulations

  14. A diffusion-theoretical method to calculate the neutron flux distribution in multisphere configurations

    International Nuclear Information System (INIS)

    Schuerrer, F.

    1980-01-01

    For characterizing heterogeneous configurations of pebble-bed reactors, the fine structure of the flux distribution as well as the determination of the macroscopic neutron-physical quantities are of interest. When calculating system parameters of Wigner-Seitz cells, the usual codes for neutron spectrum calculation neglect the modulation of the neutron flux by the influence of neighbouring spheres. To judge the error arising from that procedure it is necessary to determine the flux distribution in the surroundings of a spherical fuel element. In the present paper an approximation method to calculate the flux distribution in the two-sphere model is developed. This method is based on the exactly solvable problem of determining the flux of a point source of neutrons in an infinite medium that contains a spherical perturbation zone eccentric to the point source. An iteration method, superposing secondary fields and alternately satisfying the conditions of continuity on the surface of each of the two fuel elements, yields successively improved approximations. (orig.)

  15. Calculation of magnetization curves and probability distribution for monoclinic and uniaxial systems

    International Nuclear Information System (INIS)

    Sobh, Hala A.; Aly, Samy H.; Yehia, Sherif

    2013-01-01

    We present the application of a simple classical statistical mechanics-based model to selected monoclinic and hexagonal model systems. In this model, we treat the magnetization as a classical vector whose angular orientation is dictated by the laws of equilibrium classical statistical mechanics. We calculate, for these anisotropic systems, the magnetization curves, energy landscapes and probability distributions for different sets of relevant parameters and magnetic fields of different strengths and directions. Our results demonstrate a correlation between the most probable orientation of the magnetization vector, the system's parameters, and the external magnetic field. -- Highlights: ► We calculate magnetization curves and the angular probability distribution of the magnetization. ► The magnetization curves are consistent with the probability results for the studied systems. ► Monoclinic and hexagonal systems behave differently due to their different anisotropies.

  16. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.
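A complementary cumulative distribution function of the kind used in these performance assessments is simple to build empirically from samples of the random variable; the sketch below (with invented exponential data) returns P(X > x) at the sorted sample points.

```python
import numpy as np

def ccdf(samples):
    """Empirical complementary cumulative distribution function:
    returns (x, P(X > x)) evaluated at the sorted sample points."""
    x = np.sort(samples)
    p = 1.0 - np.arange(1, x.size + 1) / x.size
    return x, p

rng = np.random.default_rng(0)
x, p = ccdf(rng.exponential(1.0, 1000))
```

In a PA setting the samples would be consequence values drawn over the stochastic-uncertainty space, and one such CCDF is produced per element of the subjective-uncertainty space.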

  17. Quantum Fourier transform, Heisenberg groups and quasi-probability distributions

    International Nuclear Information System (INIS)

    Patra, Manas K; Braunstein, Samuel L

    2011-01-01

    This paper aims to explore the inherent connection between Heisenberg groups, quantum Fourier transform (QFT) and (quasi-probability) distribution functions. Distribution functions for continuous and finite quantum systems are examined from three perspectives and all of them lead to Weyl-Gabor-Heisenberg groups. The QFT appears as the intertwining operator of two equivalent representations arising out of an automorphism of the group. Distribution functions correspond to certain distinguished sets in the group algebra. The marginal properties of a particular class of distribution functions (Wigner distributions) arise from a class of automorphisms of the group algebra of the Heisenberg group. We then study the reconstruction of the Wigner function from the marginal distributions via inverse Radon transform giving explicit formulae. We consider some applications of our approach to quantum information processing and quantum process tomography.

  18. Sonar gas flux estimation by bubble insonification: application to methane bubble flux from seep areas in the outer Laptev Sea

    Science.gov (United States)

    Leifer, Ira; Chernykh, Denis; Shakhova, Natalia; Semiletov, Igor

    2017-06-01

    Sonar surveys provide an effective mechanism for mapping seabed methane flux emissions, with Arctic submerged permafrost seepage having great potential to significantly affect climate. We created in situ engineered bubble plumes from 40 m depth with fluxes spanning 0.019 to 1.1 L s-1 to derive the in situ calibration curve (Q(σ)). These nonlinear curves related flux (Q) to sonar return (σ) for a multibeam echosounder (MBES) and a single-beam echosounder (SBES) for a range of depths. The analysis demonstrated significant multiple bubble acoustic scattering - precluding the use of a theoretical approach to derive Q(σ) from the product of the bubble σ(r) and the bubble size distribution where r is bubble radius. The bubble plume σ occurrence probability distribution function (Ψ(σ)) with respect to Q found Ψ(σ) for weak σ well described by a power law that likely correlated with small-bubble dispersion and was strongly depth dependent. Ψ(σ) for strong σ was largely depth independent, consistent with bubble plume behavior where large bubbles in a plume remain in a focused core. Ψ(σ) was bimodal for all but the weakest plumes. Q(σ) was applied to sonar observations of natural arctic Laptev Sea seepage after accounting for volumetric change with numerical bubble plume simulations. Simulations addressed different depths and gases between calibration and seep plumes. Total mass fluxes (Qm) were 5.56, 42.73, and 4.88 mmol s-1 for MBES data with good to reasonable agreement (4-37 %) between the SBES and MBES systems. The seepage flux occurrence probability distribution function (Ψ(Q)) was bimodal, with weak Ψ(Q) in each seep area well described by a power law, suggesting primarily minor bubble plumes. The seepage-mapped spatial patterns suggested subsurface geologic control attributing methane fluxes to the current state of subsea permafrost.

  19. The flow distribution in the parallel tubes of the cavity receiver under variable heat flux

    International Nuclear Information System (INIS)

    Hao, Yun; Wang, Yueshe; Hu, Tian

    2016-01-01

    Highlights: • An experimental loop is built to find the flow distribution in the parallel tubes. • With the concentration of heat flux, two-phase flow makes the distribution more uneven. • The total flow rate is chosen appropriately for a wider heat flux distribution. • A suitable system pressure is essential for the optimization of flow distribution. - Abstract: As an optical component of a tower solar thermal power station, the heliostat mirror reflects sunlight to one point of the heated surface in the solar cavity receiver, called a one-point focusing system. The radiation heat flux concentrated in the cavity receiver is always non-uniform temporally and spatially, which may lead to extreme local overheating on the receiver evaporation panels. In this paper, an electrically heated evaporation experimental loop, including five parallel vertical tubes, is set up to evaluate the hydrodynamic characteristics of the evaporation panels in a solar cavity receiver under various non-uniform heat fluxes. The influence of the heat flux concentration ratio, total flow rate, and system pressure on the flow distribution among the parallel tubes is discussed. It is found that the flow distribution becomes significantly worse with increasing heat flux and concentration ratio, and improves as the system pressure is decreased. These findings are important for the safe and stable operation of the solar cavity receiver, and can also provide valuable references for the design and optimization of the operating parameters of a solar tower power station system.

  20. New family of probability distributions with applications to Monte Carlo studies

    International Nuclear Information System (INIS)

    Johnson, M.E.; Tietjen, G.L.; Beckman, R.J.

    1980-01-01

    A new probability distribution is presented that offers considerable potential for providing stochastic inputs to Monte Carlo simulation studies. The distribution includes the exponential power family as a special case. An efficient computational strategy is proposed for random variate generation. An example for testing the hypothesis of unit variance illustrates the advantages of the proposed distribution
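The exponential power family mentioned as a special case is available in SciPy as `gennorm` (the generalized normal distribution; β = 2 recovers the normal). A sketch of generating variates scaled to unit variance, the setting of the paper's hypothesis-testing example, using the identity Var = scale² Γ(3/β)/Γ(1/β):

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

# Exponential power (generalized normal) family; beta = 2 is the normal case.
beta = 2.0
# Choose the scale so the distribution has unit variance:
# Var = scale^2 * Gamma(3/beta) / Gamma(1/beta)
scale = 1.0 / np.sqrt(gamma(3.0 / beta) / gamma(1.0 / beta))

rng = np.random.default_rng(0)
sample = stats.gennorm.rvs(beta, scale=scale, size=2000, random_state=rng)
s2 = float(sample.var(ddof=1))  # sample variance, to be tested against 1
```

This is an illustrative stand-in for the paper's new family, which generalizes the exponential power distributions further.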

  1. Precipitation intensity probability distribution modelling for hydrological and construction design purposes

    International Nuclear Information System (INIS)

    Koshinchanov, Georgy; Dimitrov, Dobri

    2008-01-01

    The characteristics of rainfall intensity are important for many purposes, including design of sewage and drainage systems, tuning flood warning procedures, etc. Those estimates are usually statistical estimates of the intensity of precipitation realized for a certain period of time (e.g. 5, 10 min, etc.) with different return periods (e.g. 20, 100 years, etc.). The traditional approach to evaluating the mentioned precipitation intensities is to process the pluviometer records and fit probability distributions to samples of intensities valid for certain locations or regions. Those estimates further become part of the state regulations to be used for various economic activities. Two problems occur using the mentioned approach: 1. Due to various factors, climate conditions change and the precipitation intensity estimates need regular updates; 2. As far as the extremes of the probability distribution are of particular importance for the practice, the methodology of the distribution fitting needs specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates: - The method of the maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; - As above, but with separate modeling of the probability distribution for the middle- and high-probability quantiles; - A method similar to the first one, but with a threshold of 0.36 mm/min of intensity; - Another method, proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted by S. Gerasimov for Bulgaria; - The next method considers only
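The core computation, fitting a distribution to intensity samples and reading off the quantile for a given return period, can be sketched as follows. The Gumbel distribution and all data values here are illustrative assumptions, not the specific Bulgarian methodology: the T-year return level is the quantile with annual exceedance probability 1/T.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum 10-min rainfall intensities (mm/min)
rng = np.random.default_rng(0)
data = stats.gumbel_r.rvs(loc=1.0, scale=0.3, size=60, random_state=rng)

# Fit a Gumbel (extreme-value type I) distribution to the annual maxima
loc, scale = stats.gumbel_r.fit(data)

# T-year return level: quantile with exceedance probability 1/T
T = 100.0
x_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
```

Separate fits per duration (5 min, 10 min, ...) would then yield the intensity-duration-frequency estimates the abstract refers to.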

  2. Inverse identification of intensity distributions from multiple flux maps in concentrating solar applications

    International Nuclear Information System (INIS)

    Erickson, Ben; Petrasch, Jörg

    2012-01-01

    Radiative flux measurements at the focal plane of solar concentrators are typically performed using digital cameras in conjunction with Lambertian targets. To accurately predict flux distributions on arbitrary receiver geometries, directional information about the radiation is required. Currently, the directional characteristics of solar concentrating systems are predicted via ray tracing simulations; no direct experimental technique to determine the intensities of concentrating solar systems is available. In the current paper, multiple parallel flux measurements at varying distances from the focal plane, together with a linear inverse method and Tikhonov regularization, are used to identify the directional and spatial intensity distribution at the solution plane. The directional binning feature of an in-house Monte Carlo ray tracing program is used to provide a reference solution. The method has been successfully applied to two-dimensional concentrators, namely parabolic troughs and elliptical troughs, using forward Monte Carlo ray tracing simulations that provide the flux maps as well as consistent associated intensity distributions for validation. In the two-dimensional case, intensity distributions obtained from the inverse method approach the Monte Carlo forward solution. In contrast, the method has not been successful for three-dimensional and circularly symmetric concentrator geometries.
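The linear inverse method with Tikhonov regularization can be sketched on a generic ill-posed problem. The forward operator below is an invented Gaussian smoothing matrix, standing in for the mapping from intensities to flux maps; the regularized solution minimizes ||Ax - b||² + λ²||x||², solved via an augmented least-squares system.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the augmented
    least-squares system [A; lam*I] x = [b; 0]."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Ill-conditioned toy problem: Gaussian blur as the forward operator
rng = np.random.default_rng(0)
n = 50
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)) / 3.0) ** 2)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + rng.normal(0, 1e-3, n)   # noisy "measurements"
x_reg = tikhonov_solve(A, b, lam=0.1)
```

The regularization parameter λ trades data fidelity against solution norm; an unregularized solve of this system would amplify the measurement noise enormously.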

  3. Flux distribution measurements in the Bruce A unit 1 reactor

    International Nuclear Information System (INIS)

    Okazaki, A.; Kettner, D.A.; Mohindra, V.K.

    1977-07-01

    Flux distribution measurements were made by copper wire activation during low power commissioning of the unit 1 reactor of the Bruce A generating station. The distribution was measured along one diameter near the axial and horizontal midplanes of the reactor core. The activity distribution along the copper wire was measured by wire scanners with NaI detectors. The experiments were made for five configurations of reactivity control mechanisms. (author)

  4. An analytical transport theory method for calculating flux distribution in slab cells

    International Nuclear Information System (INIS)

    Abdel Krim, M.S.

    2001-01-01

    A transport theory method for calculating flux distributions in slab fuel cells is described. Two coupled integral equations for the flux in the fuel and the moderator are obtained, assuming partial reflection at the moderator's external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in the fuel and moderator and for the disadvantage factor are given. Comparison with exact numerical methods, that is, for totally reflecting moderator outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and the average fluxes. (orig.)

  5. Critical sizes and flux distributions in the shut down pile; Tailles critiques et cartes de flux a froid

    Energy Technology Data Exchange (ETDEWEB)

    Banchereau, A; Berthier, P; Genthon, J P; Gourdon, C; Lattes, R; Martelly, J; Mazancourt, R de; Portes, L; Sagot, M; Schmitt, A P; Tanguy, P; Teste du Bailler, A; Veyssiere, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1957-07-01

    An important part of the experiments carried out on the reactor G1 during a period of shut-down has consisted in determinations of critical sizes, and measurements of flux distribution by irradiations of detectors. This report deals with the following points: 1- Critical sizes of the flat pile, the long pile and the uranium-thorium pile. 2- Flux charts of the same piles, and study of an exponential experiment. 3- Determination of the slit effect. 4- Calculation of the anisotropy of the lattice. 5- Description of the experimental apparatus of the irradiation measurements. (author) [French] Une part importante des experiences a froid effectuees sur le reacteur G1 a consiste en des determinations de tailles critiques et des mesures de distributions de flux par irradiations de detecteurs. Le present rapport traite les points suivants: 1- Tailles critiques de la pile plate, de la pile longue, de la pile a uranium-thorium. 2 - Cartes de flux des memes piles et etude d'une experience exponentielle. 3 - Determination de l'effet de fente. 4 - Calcul de l'anisotropie du reseau. 5 - Description de l'appareillage experimental des mesures d'irradiations. (auteur)

  7. Evaluation of the probability distribution of intake from a single measurement on a personal air sampler

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1988-01-01

    An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^(-2/3) is used, where w is the expected number of particles collected. (author)
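The Bayesian step can be sketched numerically: with a Poisson likelihood for the particle count and the w^(-2/3) prior the paper proposes, the posterior over the expected number of collected particles is obtained on a grid (the count k and the grid limits are invented for illustration).

```python
import numpy as np
from scipy import stats

# Grid over w, the expected number of particles collected by the sampler
w = np.linspace(0.1, 50.0, 2000)
dw = w[1] - w[0]

k = 5                       # particles actually counted (illustrative)
prior = w ** (-2.0 / 3.0)   # the w^(-2/3) prior suggested in the paper

# Poisson likelihood of counting k particles given expectation w
like = stats.poisson.pmf(k, w)

# Posterior on the grid (proportional to a Gamma density in w)
post = prior * like
post /= post.sum() * dw     # normalise to a proper density

w_mean = float((w * post).sum() * dw)   # posterior mean of w
```

Analytically this posterior is Gamma-shaped with shape k + 1 - 2/3, so the grid-based posterior mean should sit near k + 1/3.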

  8. Surface flux density distribution characteristics of bulk high-Tc superconductor in external magnetic field

    International Nuclear Information System (INIS)

    Nishikawa, H.; Torii, S.; Yuasa, K.

    2005-01-01

    This paper describes the measured results of the two-dimensional flux density distribution of a YBCO bulk under applied AC magnetic fields of various frequencies. Melt-processed oxide superconductors have been developed in order to obtain strong pinning forces. Various electromechanical systems and magnetic levitation systems use these superconductors. The major problem is that cracks occur because the bulk superconductors are brittle. The bulk may break in the magnetizing process, since cracks make the superconducting state unstable. The trapped flux density and the permanent current characteristics of bulk superconductors have been analyzed so as to examine the magnetizing processes and superconducting states of the bulk. In those studies, the two-dimensional surface flux density distributions of the bulk in static fields are discussed; the distributions in dynamic fields, on the other hand, are little discussed. We attempted to examine the state of the bulk in dynamic fields and built a unique experimental device which has movable sensors synchronized with the AC applied fields. As a result, the two-dimensional distributions in the dynamic fields are acquired by recombining the one-dimensional distributions. The dynamic states of the flux in the bulk and the influence of the directions of cracks are observed from the distributions. In addition, a new method for measuring the two-dimensional flux density distribution under dynamic magnetic fields is suggested.

  9. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete...... and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard...... rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...
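The normalized-spacings statistic mentioned in the abstract can be sketched as follows: for an ordered sample, D_i = (n - i + 1)(X_(i) - X_(i-1)); under a constant hazard rate (exponential data) the D_i are i.i.d. exponential, while an increasing hazard rate produces a systematic downward trend in them. The exponential test data below are illustrative.

```python
import numpy as np

def normalized_spacings(sample):
    """Normalized spacings D_i = (n - i + 1) * (X_(i) - X_(i-1)), X_(0) = 0,
    for a nonnegative sample. For exponential (constant-hazard) data these
    are i.i.d. exponential; an increasing hazard rate shows up as a
    downward trend in the D_i."""
    x = np.sort(sample)
    n = x.size
    gaps = np.diff(np.concatenate([[0.0], x]))
    return (n - np.arange(n)) * gaps

rng = np.random.default_rng(0)
d = normalized_spacings(rng.exponential(1.0, 5000))
```

A test for increasing hazard could then compare the early spacings against the late ones, in the spirit of the procedures the paper proposes.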

  10. The effect of an iron plug on the neutron flux distributions in water

    International Nuclear Information System (INIS)

    Lotfi, A.; Maayouf, R.M.A.; Megahid, R.

    1978-01-01

    This work is concerned with studying both fast and thermal neutron flux distributions in water and their perturbation due to the presence of a cylindrical iron plug. The measurements were carried out using a collimated neutron beam emitted from one of the horizontal channels of the ET-RR-1 reactor. The fast neutron fluxes were measured using phosphorus activation detectors, while the thermal neutron fluxes were measured using glass fission-fragment track detectors. The results show that the presence of an iron plug causes a remarkable change in the intensities of both the fast and thermal neutron flux distributions in the water medium surrounding the iron plug. The flux intensities at the peaks formed beyond the iron plug in the case of thermal neutrons are also compared with values calculated using the available empirical formula.

  11. Surface flux density distribution characteristics of bulk high-Tc superconductor in external magnetic field

    International Nuclear Information System (INIS)

    Torii, S.; Yuasa, K.

    2004-01-01

    Various magnetic levitation systems using oxide superconductors have been developed, as strong pinning forces are obtained in melt-processed bulk. However, the trapped flux of a superconductor is moved by flux creep and fluctuating magnetic fields. Therefore, to examine the internal condition of the superconductor, the authors measure the dynamic surface flux density distribution of a YBCO bulk. The flux density measurement system has a structure with an air-core coil and Hall sensors. Ten Hall sensors are arranged in series. The YBCO bulk, which has a 25 mm diameter and 13 mm thickness, is field cooled by liquid nitrogen. After that, the magnetic field is changed by the air-core coil. This paper describes the measured results of the flux density distribution of the YBCO bulk at various frequencies of the air-core coil current

  12. Surface flux density distribution characteristics of bulk high-Tc superconductor in external magnetic field

    Science.gov (United States)

    Torii, S.; Yuasa, K.

    2004-10-01

    Various magnetic levitation systems using oxide superconductors have been developed, as strong pinning forces are obtained in melt-processed bulk. However, the trapped flux of a superconductor is moved by flux creep and fluctuating magnetic fields. Therefore, to examine the internal condition of the superconductor, the authors measure the dynamic surface flux density distribution of a YBCO bulk. The flux density measurement system has a structure with an air-core coil and Hall sensors. Ten Hall sensors are arranged in series. The YBCO bulk, which has a 25 mm diameter and 13 mm thickness, is field cooled by liquid nitrogen. After that, the magnetic field is changed by the air-core coil. This paper describes the measured results of the flux density distribution of the YBCO bulk at various frequencies of the air-core coil current.

  13. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility, and its probability distribution function, of cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Because of its wide applicability to industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  14. A Novel Methodology to Estimate Metabolic Flux Distributions in Constraint-Based Models

    Directory of Open Access Journals (Sweden)

    Francesco Alessandro Massucci

    2013-09-01

    Full Text Available Quite generally, constraint-based metabolic flux analysis describes the space of viable flux configurations for a metabolic network as a high-dimensional polytope defined by the linear constraints that enforce the balancing of production and consumption fluxes for each chemical species in the system. In some cases, the complexity of the solution space can be reduced by performing an additional optimization, while in other cases, knowing the range of variability of fluxes over the polytope provides a sufficient characterization of the allowed configurations. There are cases, however, in which the thorough information encoded in the individual distributions of viable fluxes over the polytope is required. Obtaining such distributions is known to be a highly challenging computational task when the dimensionality of the polytope is sufficiently large, and the problem of developing cost-effective ad hoc algorithms has recently seen a major surge of interest. Here, we propose a method that allows us to perform the required computation heuristically in a time scaling linearly with the number of reactions in the network, overcoming some limitations of similar techniques employed in recent years. As a case study, we apply it to the analysis of the human red blood cell metabolic network, whose solution space can be sampled by different exact techniques, like Hit-and-Run Monte Carlo (scaling roughly like the third power of the system size). Remarkably accurate estimates for the true distributions of viable reaction fluxes are obtained, suggesting that, although further improvements are desirable, our method enhances our ability to analyze the space of allowed configurations for large biochemical reaction networks.
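    For readers who want a feel for the exact baseline mentioned above, the sketch below implements a generic Hit-and-Run Monte Carlo sampler over a polytope given by inequality constraints. This is a textbook version under our own assumptions (the unit-square example and all names are illustrative, not the authors' heuristic method):

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng):
    """Hit-and-Run sampler for the polytope {x : A @ x <= b}.

    At each step, pick a uniformly random direction, compute the chord of
    the polytope along that direction, and jump to a uniform point on it.
    """
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_samples, x.size))
    for s in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        Ad = A @ d
        slack = b - A @ x            # slack > 0 for a strictly interior x
        t_lo, t_hi = -np.inf, np.inf
        for ad, sl in zip(Ad, slack):
            if ad > 1e-12:           # constraint binds in the +d direction
                t_hi = min(t_hi, sl / ad)
            elif ad < -1e-12:        # constraint binds in the -d direction
                t_lo = max(t_lo, sl / ad)
        x = x + rng.uniform(t_lo, t_hi) * d
        samples[s] = x
    return samples

# Toy polytope: the unit square 0 <= x, y <= 1
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
rng = np.random.default_rng(0)
pts = hit_and_run(A, b, [0.5, 0.5], 20000, rng)
```

    On this square the chain samples the uniform distribution, so coordinate histograms of `pts` approximate the marginal flux distributions a sampler of this kind would produce on a real stoichiometric polytope.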

  15. Flux distribution measurements in the Bruce B Unit 6 reactor using a transportable traveling flux detector system

    International Nuclear Information System (INIS)

    Leung, T.C.; Drewell, N.H.; Hall, D.S.; Lopez, A.M.

    1987-01-01

    A transportable traveling flux detector (TFD) system for use in power reactors has been developed and tested at the Chalk River Nuclear Laboratories in Canada. It consists of a miniature fission chamber, a motor drive mechanism, a computerized control unit, and a data acquisition subsystem. The TFD system was initially designed for the in situ calibration of fixed self-powered detectors in operating power reactors and for flux measurements to verify reactor physics calculations. However, the system can also be used as a general diagnostic tool for investigating apparent detector failures and flux anomalies and for determining the movement of reactor internal components. This paper describes the first successful use of the computerized TFD system in an operating Canada deuterium uranium (CANDU) power reactor and the results obtained from the flux distribution measurements. An attempt is made to correlate minima in the flux profile with the locations of fuel channels so that future measurements can be used to determine the sag of the channels. Twenty-seven in-core flux detector assemblies in the 855-MW (electric) Unit 6 reactor of the Ontario Hydro Bruce B Generating Station were scanned.

  16. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  17. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting-position formula to represent the maximum annual wind speed in east Cairo. Such a model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions, and an accurate determination of the probability distribution of maximum wind speed data is very important for estimating the extreme value. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions (Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull) are used to determine their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by statistical test criteria in frequency analysis; the best plotting-position formula for selecting that model must therefore be determined as well. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a period of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE.
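    The PPCC criterion above can be sketched in a few lines: sort the annual maxima, assign Weibull plotting positions p_i = i/(n+1), and correlate the ordered data with the fitted distribution's quantiles at those positions. The synthetic data and SciPy-based fitting below are our assumptions, not the study's actual workflow:

```python
import numpy as np
from scipy import stats

def ppcc(data, dist):
    """Probability-plot correlation coefficient for a candidate distribution,
    using the Weibull plotting position p_i = i / (n + 1)."""
    x = np.sort(np.asarray(data, float))
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1)
    params = dist.fit(x)              # maximum-likelihood fit
    q = dist.ppf(p, *params)          # theoretical quantiles at the positions
    return np.corrcoef(x, q)[0, 1]

rng = np.random.default_rng(1)
# Hypothetical annual-maximum wind speeds in km/h (the study used 39 years)
maxima = rng.normal(70.0, 12.0, size=39)

scores = {d.name: ppcc(maxima, d) for d in (stats.norm, stats.gumbel_r, stats.logistic)}
best = max(scores, key=scores.get)    # distribution with the highest PPCC
```

    The candidate with the PPCC closest to 1 is retained; in a full study one would repeat this over all eight plotting positions and also compute RMSE, RRMSE and MAE.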

  18. Measurement of probability distributions for internal stresses in dislocated crystals

    Energy Technology Data Exchange (ETDEWEB)

    Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)

    2014-11-03

    Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis, based on the so-called “restricted second moment of the probability distribution”, can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face centred cubic Cu deformed in uniaxial tension, a body centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.

  19. Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence

    Directory of Open Access Journals (Sweden)

    C. C. Wu

    2011-04-01

    Full Text Available Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristics of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has been successfully applied to MHD turbulence simulations and to turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  20. Implementation of drift-flux correlations in ARTIST and its assessment in comparison with THETIS void distribution

    International Nuclear Information System (INIS)

    Yun, B. J.; Kim, H. C.; Moon, S. K.; Lee, W. J.

    1998-01-01

    A non-homogeneous, non-equilibrium drift-flux model was developed in the ARTIST code to enhance the capability of predicting the two-phase flow void distribution at low pressure and low flow conditions. The governing equations of the ARTIST code consist of three continuity equations (mixture, liquid, and noncondensables), two energy equations (gas and mixture) and one mixture momentum equation constituted with the drift-flux model. To provide the concentration parameter C0 and the drift velocity Vgj of the drift-flux model, four drift-flux correlations are implemented: the Chexal-Lellouche, Ohkawa-Lahey, GE Ramp and Dix models. To evaluate the accuracy of the drift-flux correlations, the steady state void distributions of the THETIS boil-off tests are simulated. The results show that the drift-flux model is quite satisfactory in terms of accuracy and computational efficiency. Among the four correlations, the Chexal-Lellouche model showed the widest applicability in predicting the void fraction from low to high pressure. In particular, the axial void distribution at low pressure and low flow is predicted far better than by either the two-fluid model of the RELAP5/MOD3 code or the homogeneous model. Thus, the drift-flux model of the ARTIST code can be used as an efficient tool for predicting the void distribution of two-phase flow at low pressure and low flow conditions.
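    The drift-flux relation itself reduces to a one-line formula: the area-averaged void fraction is alpha = j_g / (C0*j + Vgj), with j the total superficial velocity. The sketch below evaluates it with a simple Zuber-Findlay churn-turbulent closure as a stand-in (the Chexal-Lellouche correlation used in the record is far more elaborate; all numeric property values here are approximate assumptions):

```python
def void_fraction(j_g, j_f, c0, v_gj):
    """Area-averaged void fraction from the drift-flux relation
    alpha = j_g / (C0 * j + Vgj), where j = j_g + j_f [m/s]."""
    j = j_g + j_f
    return j_g / (c0 * j + v_gj)

def zuber_findlay_vgj(sigma, rho_f, rho_g, g=9.81):
    """Churn-turbulent drift velocity Vgj = 1.41 * (sigma*g*drho/rho_f^2)^0.25.
    This is an assumed closure, not the Chexal-Lellouche model."""
    return 1.41 * (sigma * g * (rho_f - rho_g) / rho_f**2) ** 0.25

# Saturated water/steam near 1 bar (approximate properties)
vgj = zuber_findlay_vgj(sigma=0.059, rho_f=958.0, rho_g=0.6)   # ~0.22 m/s
alpha = void_fraction(j_g=0.5, j_f=1.0, c0=1.13, v_gj=vgj)
```

    Setting C0 = 1 and Vgj = 0 recovers the homogeneous model, which is why drift-flux closures matter most exactly in the low-pressure, low-flow regime the record describes.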

  1. Probability distributions for first neighbor distances between resonances that belong to two different families

    International Nuclear Information System (INIS)

    Difilippo, F.C.

    1994-01-01

    For a mixture of two families of resonances, we found the probability distribution for the distance, as first neighbors, between resonances that belong to different families. Integration of this distribution gives the probability of accidental overlapping of resonances of one isotope by resonances of the other, provided that the resonances of each isotope belong to a single family. (author)

  2. Diachronic changes in word probability distributions in daily press

    Directory of Open Access Journals (Sweden)

    Stanković Jelena

    2006-01-01

    Full Text Available Changes in the probability distributions of individual words and word types were investigated in two samples of the daily press spanning fifty years. The first sample was derived from the Corpus of Serbian Language (CSL) /Kostić, Đ., 2001/, which covers the period between 1945 and 1957, and the second from the Ebart Media Documentation (EBR), which was compiled from seven daily newspapers and five weekly magazines from 2002 and 2003. Each sample consisted of about 1 million words. The obtained results indicate that nouns and adjectives were more frequent in the CSL, while verbs and prepositions are more frequent in the EBR sample, suggesting a decrease in sentence length over the last five decades. Conspicuous changes in the probability distribution of individual words were observed for nouns and adjectives, while minimal or no changes were observed for verbs and prepositions. This outcome suggests that nouns and adjectives are most susceptible to diachronic changes, while verbs and prepositions appear to be resistant to such changes.

  3. Probability distributions in conservative energy exchange models of multiple interacting agents

    International Nuclear Information System (INIS)

    Scafetta, Nicola; West, Bruce J

    2007-01-01

    Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ in the rules that regulate the energy exchange and in boundary effects. We find a variety of stochastic behaviours that manifest energy equilibrium probability distributions of different types, and interaction rules that yield not only exponential distributions such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power law distributions, mixed exponential and inverse power law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena, including those to be found in complex chemical reactions.
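    The simplest such rule is easy to simulate: two random agents pool their energy and redivide it uniformly. Total energy is conserved at every step, and the equilibrium is the exponential (Boltzmann-Gibbs) distribution. This minimal sketch is our illustration of that one rule, not a reproduction of the paper's full family of models:

```python
import numpy as np

def exchange_model(n_agents=5000, n_steps=100_000, seed=0):
    """Conservative pairwise energy exchange: at each step two distinct agents
    pool their energy and split it uniformly at random."""
    rng = np.random.default_rng(seed)
    e = np.ones(n_agents)                  # every agent starts with unit energy
    for _ in range(n_steps):
        i, j = rng.integers(0, n_agents, size=2)
        if i == j:
            continue
        total = e[i] + e[j]                # conserved in the interaction
        r = rng.random()
        e[i], e[j] = r * total, (1 - r) * total
    return e

e = exchange_model()
# Equilibrium: exponential distribution with mean 1 (so std ~ 1,
# and a fraction 1 - exp(-1) ~ 0.63 of agents sit below the mean)
```

    Changing the redistribution rule (e.g. saving a fixed fraction before splitting) is what produces the Gamma and power-law equilibria the abstract lists.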

  4. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    Science.gov (United States)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid and semi-arid countries such as Iran, so the regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity; therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were determined. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the ability of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters was similar. The results also showed that the distribution type of the parameters affecting ET0 can influence the distribution of the reference evapotranspiration.
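    The L-moment method mentioned above starts from sample L-moments computed via probability-weighted moments of the ordered data. A minimal sketch (the Normal test data is our assumption; a regional study would compute these per station and pool them on an L-moment ratio diagram):

```python
import numpy as np

def sample_l_moments(data):
    """First three sample L-moments (l1, l2, l3) from probability-weighted
    moments b0, b1, b2 of the sorted sample."""
    x = np.sort(np.asarray(data, float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                       # L-location (the mean)
    l2 = 2 * b1 - b0              # L-scale
    l3 = 6 * b2 - 6 * b1 + b0     # third L-moment; t3 = l3/l2 is L-skewness
    return l1, l2, l3

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=20000)
l1, l2, l3 = sample_l_moments(x)
# For Normal(mu, sigma): l1 = mu, l2 = sigma / sqrt(pi), L-skewness = 0
```

    Candidate distributions (Pearson III among them) are then judged by how close their theoretical (t3, t4) curve passes to the sample L-moment ratios.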

  5. The joint probability distribution of structure factors incorporating anomalous-scattering and isomorphous-replacement data

    International Nuclear Information System (INIS)

    Peschar, R.; Schenk, H.

    1991-01-01

    A method to derive joint probability distributions of structure factors is presented which incorporates anomalous-scattering and isomorphous-replacement data in a unified procedure. The structure factors F_H and F_-H, whose magnitudes are different due to anomalous scattering, are shown to be isomorphously related. This leads to a definition of isomorphism by means of which isomorphous-replacement and anomalous-scattering data can be handled simultaneously. The definition and calculation of the general term of the joint probability distribution for isomorphous structure factors turns out to be crucial. Its analytical form leads to an algorithm by means of which any particular joint probability distribution of structure factors can be constructed. The calculation of the general term is discussed for the case of four isomorphous structure factors in P1, assuming the atoms to be independently and uniformly distributed. A main result is the construction of the probability distribution of the 64 triplet phase sums present in space group P1 amongst four isomorphous structure factors F_H, four isomorphous F_K and four isomorphous F_-H-K. The procedure is readily generalized to the case where an arbitrary number of isomorphous structure factors are available for F_H, F_K and F_-H-K. (orig.)

  6. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable, error-prone communication channels and devices. There are two classes of error-detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors, but have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in contrast, have a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes; a detailed study of this parameter allows analyzing the behavior of the code when errors are injected into the encoding device. The complexity of the encoding function plays an important role in security-oriented codes: encoding functions with low computational complexity and a low masking probability give the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding-function complexity on the error masking probability distribution. It is shown that a more complex encoding function reduces the maximum of the error masking probability, and that increasing the function complexity changes the error masking probability distribution; in particular, it decreases the difference between the maximum and the average value of the error masking probability. Our results show that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the code by an attacker. As a result, for a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking probability.

  7. Investigating and improving student understanding of the probability distributions for measuring physical observables in quantum mechanics

    International Nuclear Information System (INIS)

    Marshman, Emily; Singh, Chandralekha

    2017-01-01

    A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations. (paper)

  8. Implementation of drift-flux model in artist and assessment to thetis void distribution

    International Nuclear Information System (INIS)

    Kim, H. C.; Yun, B. J.; Moon, S. K.; Jeong, J. J.; Lee, W. J.

    1998-01-01

    A system transient analysis code, ARTIST, based on the drift-flux model is being developed to enhance the capability of predicting the two-phase flow void distribution at low pressure and low flow conditions. The governing equations of the ARTIST code consist of three continuity equations (mixture, liquid, and noncondensables), two energy equations (gas and mixture) and one mixture momentum equation constituted with the drift-flux model. Area-averaged one-dimensional conservation equations are established using the flow quality expressed in terms of the relative velocity, which is obtained from the drift-flux relationship. The Chexal-Lellouche void fraction correlation is used to provide the drift velocity and the concentration parameter. The implicit one-step method and the block elimination technique are employed as the numerical solution scheme for the node-flowpath thermal-hydraulic network. To validate the ARTIST code, the steady state void distributions of the THETIS boil-off tests are simulated. The axial void distributions calculated with the Chexal-Lellouche void fraction correlation at low pressure and low flow are better than those of both the two-fluid model of the RELAP5/MOD3 code and the homogeneous model. The drift-flux model of the ARTIST code is an efficient tool for predicting the void distribution of two-phase flow at low pressure and low flow conditions.

  9. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
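    The classical baseline described above is a one-line call in standard scientific software. The sketch below runs a Kolmogorov-Smirnov test of heavy-tailed draws against a specified (standard normal) density, the very situation where the paper argues CDF-based tests can be insensitive; the data choice is our illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Heavy-tailed draws that do NOT come from the standard normal density
draws = rng.standard_t(df=3, size=5000)
stat_bad, p_bad = stats.kstest(draws, stats.norm.cdf)

# Draws that really do come from the specified density, for comparison
stat_ok, p_ok = stats.kstest(rng.standard_normal(5000), stats.norm.cdf)
```

    A small p-value for the first test correctly rejects the specified density; Kuiper's variant (`scipy.stats.cramervonmises` and related tests offer further alternatives) weights the tails somewhat differently, which is the deficiency the paper's density-based tests target directly.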

  10. Self-similarity of fluctuation particle fluxes in the plasma edge of the stellarator L-2M

    Energy Technology Data Exchange (ETDEWEB)

    Saenko, V.V. [Ulyanovsk State University, Leo Tolstoy str., 42, Ulyanovsk (Russian Federation)

    2010-05-15

    Results are presented of statistical studies of probability density of fluctuations of plasma density, floating potential, and turbulent particle fluxes measured by a Langmuir probe in the edge plasma of the L-2M stellarator. Empirical probability densities differ from Gaussian distributions. The empirical probability density distributions have heavy tails decreasing as x^(-α-1) and are leptokurtic. Fractional stable distributions were successfully applied to describing such distributions. It is shown that fractional stable distributions give good fit to the distributions of increments of fluctuation amplitudes of physical variables under study. The distribution parameters are statistically estimated from measured time sequences (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
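    A power-law tail decaying as x^(-α-1) can be probed with a standard tail-index method. The sketch below uses a Hill estimator on synthetic Pareto data; this is our choice of illustration, not the paper's fractional-stable parameter estimation:

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimator of the tail index alpha from the k largest order
    statistics: 1 / mean(log x_(i) - log x_(k))."""
    x = np.sort(np.asarray(data, float))[::-1]   # descending order statistics
    logs = np.log(x[:k + 1])
    return 1.0 / np.mean(logs[:k] - logs[k])

rng = np.random.default_rng(6)
alpha_true = 1.5
# Classical Pareto with x_m = 1: survival function x^(-alpha)
x = rng.pareto(alpha_true, size=100_000) + 1.0
alpha_hat = hill_estimator(x, k=2000)
```

    In practice the estimate is plotted against k and read off on the plateau; for a symmetric heavy-tailed signal one would apply it to the absolute values of the fluctuation increments.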

  11. Self-similarity of fluctuation particle fluxes in the plasma edge of the stellarator L-2M

    International Nuclear Information System (INIS)

    Saenko, V.V.

    2010-01-01

    Results are presented of statistical studies of probability density of fluctuations of plasma density, floating potential, and turbulent particle fluxes measured by a Langmuir probe in the edge plasma of the L-2M stellarator. Empirical probability densities differ from Gaussian distributions. The empirical probability density distributions have heavy tails decreasing as x^(-α-1) and are leptokurtic. Fractional stable distributions were successfully applied to describing such distributions. It is shown that fractional stable distributions give good fit to the distributions of increments of fluctuation amplitudes of physical variables under study. The distribution parameters are statistically estimated from measured time sequences (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  12. Multimode Interference: Identifying Channels and Ridges in Quantum Probability Distributions

    OpenAIRE

    O'Connell, Ross C.; Loinaz, Will

    2004-01-01

    The multimode interference technique is a simple way to study the interference patterns found in many quantum probability distributions. We demonstrate that this analysis not only explains the existence of so-called "quantum carpets," but can explain the spatial distribution of channels and ridges in the carpets. With an understanding of the factors that govern these channels and ridges we have a limited ability to produce a particular pattern of channels and ridges by carefully choosing the ...

  13. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study we examined and compared three different probability distributions to determine the most suitable model for the probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 with magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We conclude that the Weibull distribution is the most suitable of the three for this region.
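    The fit-then-test workflow described above can be sketched in open-source form (the record used Easyfit and Matlab). The synthetic interevent times below are our assumption, and SciPy's `invweibull` plays the role of the Frechet distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical interevent times (years) between M >= 6.0 earthquakes
times = stats.weibull_min.rvs(1.3, scale=4.0, size=115, random_state=rng)

candidates = {
    "Weibull (2-p)": (stats.weibull_min, {"floc": 0}),
    "Weibull (3-p)": (stats.weibull_min, {}),          # location left free
    "Frechet":       (stats.invweibull, {"floc": 0}),
}

results = {}
for name, (dist, fit_kwargs) in candidates.items():
    params = dist.fit(times, **fit_kwargs)             # maximum likelihood
    d, p = stats.kstest(times, dist.cdf, args=params)  # K-S goodness of fit
    results[name] = (d, p)                             # smaller D = better fit
```

    One caveat worth noting: K-S p-values computed with parameters estimated from the same data are biased optimistic, so studies of this kind usually compare the D statistics across candidates rather than reading the p-values literally.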

  14. Probability distribution relationships

    Directory of Open Access Journals (Sweden)

    Yousry Abdelkader

    2013-05-01

    Full Text Available In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks. The first one concerns the continuous distributions and their relations. The second one presents the discrete distributions. The third diagram depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.
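    The edges in such diagrams are special-case and limiting relations that can be checked numerically. The sketch below verifies three well-known ones (our choice of examples; the paper's diagrams contain many more):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 10.0, 50)

# Special case: the Exponential is a Weibull with shape c = 1
weibull_as_expon = np.allclose(stats.weibull_min.pdf(x, 1.0, scale=2.0),
                               stats.expon.pdf(x, scale=2.0))

# Special case: chi-square with k dof is a Gamma with shape k/2, scale 2
chi2_as_gamma = np.allclose(stats.chi2.pdf(x, 4),
                            stats.gamma.pdf(x, 2.0, scale=2.0))

# Limiting relation: Student's t approaches the standard Normal as dof grows
t_to_normal = np.max(np.abs(stats.t.pdf(x, 500) - stats.norm.pdf(x)))
```

    The first two relations hold exactly (edges between nodes in the continuous-distribution network), while the third is a limiting edge: the discrepancy shrinks like O(1/dof).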

  15. The distribution function of a probability measure on a space with a fractal structure

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Granero, M.A.; Galvez-Rodriguez, J.F.

    2017-07-01

    In this work we show how to define a probability measure with the help of a fractal structure. One of the keys of this approach is to use the completion of the fractal structure. Then we use the theory of a cumulative distribution function on a Polish ultrametric space and describe it in this context. Finally, with the help of fractal structures, we prove that a function satisfying the properties of a cumulative distribution function on a Polish ultrametric space is a cumulative distribution function with respect to some probability measure on the space. (Author)

  16. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

    with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...

  17. Neutron Flux Distribution on Neutron Radiography Facility After Fixing the Collimator

    International Nuclear Information System (INIS)

    Supandi; Parikin; Mohtar; Sunardi; Roestam, S

    1996-01-01

    The Neutron Radiography Facility consists of an inner collimator, an outer collimator, a main shutter, a second shutter and a sample chamber 300 mm in diameter. Neutron beam quality depends on the neutron flux intensity distribution, the L/D ratio, the Cd ratio and the neutron/gamma ratio. The results show that the neutron flux intensity was 2.83 x 10^7 n.cm^-2.s^-1, with a deviation of +7.8%, and that it was distributed homogeneously at the sample position of 200 mm diameter. The beam characteristics were an L/D ratio of 98, a Cd ratio of 8 and a neutron/gamma ratio of 3.08 x 10^5 n.cm^-2.mR^-1, at a reactor power of 20 MW. This technique can be used to examine samples with diameters < 200 mm.

  18. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP)

  19. Calibration of a distributed hydrology and land surface model using energy flux measurements

    DEFF Research Database (Denmark)

    Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.

    2016-01-01

    In this study we develop and test a calibration approach on a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component, with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measurements.

  20. The p-sphere and the geometric substratum of power-law probability distributions

    International Nuclear Information System (INIS)

    Vignat, C.; Plastino, A.

    2005-01-01

Links between power-law probability distributions and marginal distributions of uniform laws on p-spheres in Rⁿ show that a mathematical derivation of the Boltzmann-Gibbs distribution necessarily passes through power-law ones. Results are also given that link the parameters p and n to the value of the non-extensivity parameter q that characterizes these power laws in the context of non-extensive statistics.

  1. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slips", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters the soil, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model
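The derived-distribution idea in this record can be illustrated with a toy Monte Carlo sketch (the paper's route is analytic): push exponential storm intensity and duration through a deterministic failure model to obtain a per-storm FOS distribution and a return period. All numbers and the linear FOS model below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# RPPP storms: independent exponential mean intensity (mm/h) and duration (h)
intensity = rng.exponential(10.0, n)
duration = rng.exponential(6.0, n)

# Toy deterministic failure model (assumed): storm depth lowers the Factor of
# Safety linearly, saturating once the humid front passes the critical depth
depth = np.minimum(intensity * duration, 120.0)   # mm, capped
fos = 1.8 - 0.008 * depth                         # dry-slope FOS = 1.8 (illustrative)

p_fail = float((fos < 1.0).mean())                # per-storm exceedance probability
storms_per_year = 40.0
return_period_years = 1.0 / (p_fail * storms_per_year)
print(p_fail, return_period_years)
```

Exceedance probability per storm and an annual return period then follow directly from the storm arrival rate, as the abstract indicates.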

  2. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
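The two-step recipe in this record (spectral coloring of Gaussian white noise, then a monotonic transform to the target marginal) can be sketched as follows. The grid size, the power-law spectrum, and the exponential target marginal are illustrative assumptions, and a rank-based probability integral transform stands in for the paper's Gaussian-CDF-based mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Step 1: white Gaussian noise, colored in Fourier space to impose a spectrum
white = rng.standard_normal((n, n))
kx = np.fft.fftfreq(n)
ky = np.fft.fftfreq(n)
k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
k[0, 0] = 1.0                            # avoid division by zero at the DC bin
amp = k ** -1.0                          # amplitude filter -> power spectrum ~ k^-2 (assumed)
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * amp))

# Step 2: monotonic transform of the colored Gaussian field to the desired
# marginal (here exponential, mean 1) via a rank-based probability integral
# transform; ranks preserve the spatial ordering of values
ranks = colored.ravel().argsort().argsort()
u = (ranks + 0.5) / ranks.size           # uniform(0,1) scores
field = -np.log(1.0 - u).reshape(n, n)   # inverse CDF of the exponential

print(field.shape, float(field.mean()))
```

The monotonic transform changes the marginal distribution while only mildly distorting the imposed correlation structure, which is the engineering trade-off the abstract alludes to.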

  3. Estimating probable flaw distributions in PWR steam generator tubes

    International Nuclear Information System (INIS)

    Gorman, J.A.; Turner, A.P.L.

    1997-01-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses

  4. Rocket measurements of relativistic electrons: New features in fluxes, spectra and pitch angle distributions

    International Nuclear Information System (INIS)

    Herrero, F.A.; Baker, D.N.; Goldberg, R.A.

    1991-01-01

The authors report new features of precipitating relativistic electron fluxes measured on a spinning sounding rocket payload at midday between altitudes of 70 and 130 km in the auroral region (Poker Flat, Alaska, 65.1°N, 147.5°W, L = 5.5). The sounding rocket (NASA 33.059) was launched at 21:29 UT on May 13, 1990 during a relativistic electron enhancement event of modest intensity. Electron fluxes were measured for a total of about 210 seconds at energies from 0.1 to 3.8 MeV, while pitch angle was sampled from 0° to 90° every spin cycle. Flux levels during the initial 90 seconds were about 5 to 8 times higher than in the next 120 seconds, revealing a time scale of more than 100 seconds for large-amplitude intensity variations. A shorter time scale appeared for downward electron bursts lasting 10 to 20 seconds. Electrons with energies below about 0.2 MeV showed isotropic pitch angle distributions during most of the first 90 seconds of data, while at higher energies the electrons had highest fluxes near the mirroring angle (90°); when they occurred, the noted downward bursts were seen at all energies. Data obtained during the second half of the flight showed little variation in the shape of the pitch angle distribution for energies greater than 0.5 MeV; the flux at 90° was about 100 times the flux at 0°. The authors have compared the low-altitude fluxes with those measured at geostationary orbit (L = 6.6), and find that the low-altitude fluxes are much higher than expected from a simple mapping of a pancake distribution at high altitudes (at the equator). Energy deposition for this modest event is estimated to increase rapidly above 45 km, already exceeding the cosmic ray background at 45 km

  5. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

... seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first order statistics. An accuracy comparable to that of secondary...

  6. Critical sizes and flux distributions in the shut down pile

    International Nuclear Information System (INIS)

    Banchereau, A.; Berthier, P.; Genthon, J.P.; Gourdon, C.; Lattes, R.; Martelly, J.; Mazancourt, R. de; Portes, L.; Sagot, M.; Schmitt, A.P.; Tanguy, P.; Teste du Bailler, A.; Veyssiere, A.

    1957-01-01

    An important part of the experiments carried out on the reactor G1 during a period of shut-down has consisted in determinations of critical sizes, and measurements of flux distribution by irradiations of detectors. This report deals with the following points: 1- Critical sizes of the flat pile, the long pile and the uranium-thorium pile. 2- Flux charts of the same piles, and study of an exponential experiment. 3- Determination of the slit effect. 4- Calculation of the anisotropy of the lattice. 5- Description of the experimental apparatus of the irradiation measurements. (author) [fr

  7. A measure of mutual divergence among a number of probability distributions

    Directory of Open Access Journals (Sweden)

    J. N. Kapur

    1987-01-01

major inequalities due to Shannon, Renyi and Holder. The inequalities are then used to obtain some useful results in information theory. In particular, measures are obtained of the mutual divergence among two or more probability distributions.

  8. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
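The best-performing approach in this record, fitting a parametric CDF to the cumulative distribution of interval-censored retention times by least squares, can be sketched minimally as follows. The lognormal target, the sample size, the interval grid, and the coarse parameter grid search (standing in for a proper non-linear least-squares optimizer) are all assumptions for illustration.

```python
import numpy as np
from math import erf, sqrt, log

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal distribution at t > 0."""
    return 0.5 * (1.0 + erf((log(t) - mu) / (sigma * sqrt(2.0))))

rng = np.random.default_rng(2)
true_mu, true_sigma = log(4.0), 0.5
samples = rng.lognormal(true_mu, true_sigma, 500)   # synthetic retention times (h)

# Interval-censored observation: cumulative proportions at sampling-interval bounds
edges = np.arange(1.0, 25.0, 2.0)
ecdf = np.array([(samples <= e).mean() for e in edges])

# Least-squares fit of the parametric CDF to the cumulative data
# (a coarse grid search stands in for a proper non-linear optimizer)
best = min(
    (sum((lognorm_cdf(e, m, s) - f) ** 2 for e, f in zip(edges, ecdf)), m, s)
    for m in np.linspace(0.5, 2.5, 81)
    for s in np.linspace(0.1, 1.5, 71)
)
sse, mu_hat, s_hat = best
print(mu_hat, s_hat)
```

Fitting the cumulative curve uses every sampling interval jointly, which is why the abstract finds it more robust than fitting the raw interval bounds as if they were exact observations.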

  9. Process and equipment for monitoring flux distribution in a nuclear reactor outside the core

    International Nuclear Information System (INIS)

    Graham, K.F.; Gopal, R.

    1977-01-01

This concerns a system for monitoring the axial flux distribution over the whole load operating range from outside the core of, for example, a PWR. Flux distribution maps can be produced continuously. The core is divided into at least three sections, formed by dividing it at right angles to the longitudinal axis, and the flux is measured outside the core using adjacent detectors. Their output signals are calibrated by amplifiers so that the load distribution in the associated sections is reproduced. A summation of the calibrated output signals and the formation of a mean load signal take place in summing stages. For monitoring, this is compared with a value which corresponds to the maximum permissible load setting. In addition, the position of the control rods in the core can be taken into account by multiplying the mean load signals by suitable peak factors. The distribution of monitoring positions, i.e. the positions of the detectors, can be progressive or symmetrical along the axis. (DG) 891 HP [de

  10. Proposal for a new method of reactor neutron flux distribution determination

    Energy Technology Data Exchange (ETDEWEB)

    Popic, V R [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1964-01-15

    A method, based on the measurements of the activity produced in a medium flowing with variable velocity through a reactor, for the determination of the neutron flux distribution inside a reactor is considered theoretically (author)

  11. Flux through a Markov chain

    International Nuclear Information System (INIS)

    Floriani, Elena; Lima, Ricardo; Ourrad, Ouerdia; Spinelli, Lionel

    2016-01-01

Highlights: • The flux through a Markov chain of a conserved quantity (mass) is studied. • Mass is supplied by an external source and ends in the absorbing states of the chain. • Meaningful for modeling open systems whose dynamics has a Markov property. • The analytical expression of the mass distribution is given for a constant source. • The expression of the mass distribution is given for periodic or random sources. - Abstract: In this paper we study the flux through a finite Markov chain of a quantity, which we will call mass, that moves through the states of the chain according to the Markov transition probabilities. Mass is supplied by an external source and accumulates in the absorbing states of the chain. We believe that studying how this conserved quantity evolves through the transient (non-absorbing) states of the chain could be useful for modeling open systems whose dynamics has a Markov property.
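The constant-source case in this record can be sketched with a toy three-state chain (two transient states, one absorbing; the matrix and source below are invented): iterating m_{t+1} = m_t P + s drives the transient mass to the fixed point s_T (I − Q)⁻¹, where Q is the transient-to-transient block, and at stationarity the flux into the absorbing state balances the source input.

```python
import numpy as np

# Toy chain (assumed): states 0 and 1 are transient, state 2 is absorbing
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
s = np.array([1.0, 0.0, 0.0])   # constant source: unit mass into state 0 each step

m = np.zeros(3)
for _ in range(500):            # iterate m_{t+1} = m_t P + s
    m = m @ P + s

# Fixed point of the transient mass: s_T (I - Q)^{-1}
Q = P[:2, :2]
m_stat = s[:2] @ np.linalg.inv(np.eye(2) - Q)

# At stationarity the flux into the absorbing state equals the source input
flux_to_absorber = float(m_stat @ P[:2, 2])
print(m[:2], m_stat, flux_to_absorber)
```

The mass accumulated in the absorbing state grows linearly once the transient distribution has converged, which is the conservation property the abstract emphasizes.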

  12. Outage probability of distributed beamforming with co-channel interference

    KAUST Repository

    Yang, Liang

    2012-03-01

    In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.

  13. Gas Hydrate Formation Probability Distributions: The Effect of Shear and Comparisons with Nucleation Theory.

    Science.gov (United States)

    May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M

    2018-03-13

Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.

  14. The transmission probability method in one-dimensional cylindrical geometry

    International Nuclear Information System (INIS)

    Rubin, I.E.

    1983-01-01

The collision probability method widely used in solving neutron transport problems in a reactor cell is reliable for simple cells with a small number of zones. Increasing the number of zones, or taking the anisotropy of scattering into account, greatly increases the scope of the calculations. In order to reduce the computation time, the transmission probability method is suggested for flux calculations in one-dimensional cylindrical geometry, taking scattering anisotropy into account. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The use of the transmission probability method allows the angular and spatial dependences of the neutron distributions to be represented completely without increasing the scope of the calculations. The method is especially effective in solving multi-group problems

  15. Surface flux density distribution characteristics of bulk high-Tc superconductor in external magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Torii, S.; Yuasa, K

    2004-10-01

Various magnetic levitation systems using oxide superconductors have been developed, since strong pinning forces are obtained in melt-processed bulk. However, the trapped flux of a superconductor is moved by flux creep and fluctuating magnetic fields. Therefore, to examine the internal condition of the superconductor, the authors measure the dynamic surface flux density distribution of a YBCO bulk. The flux density measurement system consists of an air-core coil and Hall sensors; ten Hall sensors are arranged in series. The YBCO bulk, which is 25 mm in diameter and 13 mm thick, is field cooled by liquid nitrogen. After that, the magnetic field is changed by the air-core coil. This paper describes the measured flux density distributions of the YBCO bulk at various frequencies of the air-core coil current.

  16. Probability Density Estimation Using Neural Networks in Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo

    2008-01-01

Monte Carlo neutronics analysis requires the capability to estimate tally distributions such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as estimating a probability density function from an observation set. We apply the neural-network-based density estimation method to an observation and sampling weight set produced by Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and an absorption rate gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)

  17. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  18. Dynamic SEP event probability forecasts

    Science.gov (United States)

    Kahler, S. W.; Ling, A.

    2015-10-01

    The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
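The decaying forecast described in this record can be illustrated with a simple Bayesian update under an assumed onset-delay distribution; the exponential delay model, the 6 h mean delay, and the 0.4 prior are invented numbers for illustration, not the paper's fitted values or its algorithm.

```python
import math

p0 = 0.4          # assumed prior probability of a SEP event given the flare
mean_delay = 6.0  # assumed exponential mean of the X-ray-peak-to-onset delay, hours

def dynamic_probability(hours_since_flare: float) -> float:
    """P(event | no onset observed by t): prior reweighted by delay survival."""
    survival = math.exp(-hours_since_flare / mean_delay)
    return p0 * survival / (p0 * survival + (1.0 - p0))

# Forecast decreases as a SEP event fails to appear after the flare
print([round(dynamic_probability(t), 3) for t in (0.0, 6.0, 12.0, 24.0)])
```

The forecast starts at the prior and decays toward zero as the elapsed time exceeds typical onset delays, mirroring the behavior the abstract motivates for time-critical operations.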

  19. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.

  20. Optimization of neutron flux distribution in Isotope Production Reactor

    International Nuclear Information System (INIS)

    Valladares, G.L.

    1988-01-01

In order to optimize the thermal neutron flux distribution in a Radioisotope Production and Research Reactor, the influence of two reactor parameters was studied, namely the Vmod/Vcomb ratio and the core volume. The reactor core is built with uranium oxide (UO₂) pellets mounted in rod clusters, with an enrichment level of ∼3%, similar to light water reactor (LWR) fuel elements. (author) [pt

  1. Conical pitch angle distributions of very-low energy ion fluxes observed by ISEE 1

    International Nuclear Information System (INIS)

    Horowitz, J.L.; Baugher, C.R.; Chappell, C.R.; Shelley, E.G.; Young, D.T.

    1982-01-01

Observations of low-energy ionospheric ions by the plasma composition experiment aboard ISEE 1 often show conical pitch angle distributions, that is, peak fluxes between 0° and 90° to the directions parallel or antiparallel to the magnetic field. Frequently, all three primary ionospheric ion species (H⁺, He⁺, and O⁺) simultaneously exhibit conical distributions with peak fluxes at essentially the same pitch angle. A distinction is made here between unidirectional, or streaming, distributions, in which ions are traveling essentially from only one hemisphere, and symmetric distributions, in which significant fluxes are observed traveling from both hemispheres. The orbital coverage for this survey was largely restricted to the night sector, approximately 2100-0600 LT, and moderate geomagnetic latitudes of 20°-40°. Also, lack of complete pitch angle coverage at all times may have reduced detection of conics with small cone angles. However, we may conclude that the unidirectional conical distributions observed in the northern hemisphere are always observed to be traveling from the northern hemisphere and that they exhibit the following characteristics relative to the symmetric distributions: they (1) are typically observed on higher L shells (that is, higher geomagnetic latitudes or larger geocentric distances or both), (2) tend to have significantly larger cone angles, and (3) are associated with higher magnetic activity levels

  2. Probabilities of filaments in a Poissonian distribution of points -I

    International Nuclear Information System (INIS)

    Betancort-Rijo, J.

    1989-01-01

Statistical techniques are devised to assess the likelihood that a Poisson sample of points in two or three dimensions contains specific filamentary structures. For that purpose, the expression of Otto et al. (1986, Astrophys. J., 304) for the probability density of clumps in a Poissonian distribution of points is generalized to any value of the density contrast. A way of counting filaments differing from that of Otto et al. is proposed, because at low density contrast the filaments counted by Otto et al. are distributed in a clumpy fashion, each clump of filaments corresponding to a distinct observed filament. (author)

  3. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model, with an analytic aggregation methodology based on probability theory rather than Monte Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional. © 1991.
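The per-component statistics and the two polar aggregation rules in this record have simple closed forms; the sketch below uses two hypothetical components and the standard triangular-distribution formulas (it is not the TRIAGG code itself, and the fractile computation is omitted).

```python
import math

def tri_mean_var(a: float, mode: float, b: float):
    """Mean and variance of a triangular distribution with min a, mode, and max b."""
    mean = (a + mode + b) / 3.0
    var = (a * a + mode * mode + b * b - a * mode - a * b - mode * b) / 18.0
    return mean, var

# Two hypothetical components: (minimum, most likely, maximum) resource estimates
components = [(2.0, 5.0, 12.0), (1.0, 3.0, 8.0)]
stats = [tri_mean_var(*c) for c in components]

mean_total = sum(m for m, _ in stats)                 # means add in every case
sd_indep = math.sqrt(sum(v for _, v in stats))        # complete independence: variances add
sd_perfect = sum(math.sqrt(v) for _, v in stats)      # perfect positive correlation: std devs add
print(mean_total, sd_indep, sd_perfect)
```

Intermediate degrees of dependence interpolate between the two standard deviations, matching the third aggregation situation the abstract lists.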

  4. Visualization of neutron flux and power distributions in TRIGA Mark II reactor as an educational tool

    International Nuclear Information System (INIS)

    Snoj, Luka; Ravnik, Matjaz; Lengar, Igor

    2008-01-01

Modern Monte Carlo computer codes (e.g. MCNP) for neutron transport allow the calculation of detailed neutron flux and power distributions in complex geometries with a resolution of ∼1 mm. Moreover, they enable the calculation of individual particle tracks and of scattering and absorption events. With the use of advanced software for 3D visualization (e.g. Amira, Voxler, etc.) one can create and present neutron flux and power distributions in a 'user friendly' way convenient for educational purposes. One can view axial, radial or any other spatial distribution of the neutron flux and power in a nuclear reactor from various perspectives and in various modalities of presentation. By visualizing the distribution of scattering and absorption events and individual particle tracks, one can illustrate neutron transport parameters (mean free path, diffusion length, macroscopic cross section, up-scattering, thermalization, etc.) from an elementary point of view. Most people remember things better if they can visualize them, so such representations of reactor and neutron transport parameters are a convenient modern educational tool for nuclear power plant operators, nuclear engineers, students and specialists involved in reactor operation and design. The visualization of neutron flux and power distributions in the Jozef Stefan Institute TRIGA Mark II research reactor is treated in this paper. The distributions are calculated with the MCNP computer code and presented using the Amira and Voxler software. The results are presented as figures, together with comments qualitatively explaining them. (authors)

  5. Idealized models of the joint probability distribution of wind speeds

    Science.gov (United States)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
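The bivariate Weibull construction described in this record can be sketched directly: correlated, isotropic, mean-zero Gaussian velocity components give Rayleigh speeds (Weibull with shape 2) at the two locations, and a power-law transform retunes the shape parameter. The correlation value and target shape below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
rho = 0.6                        # assumed Gaussian-component correlation between sites

# Correlated standard-normal u and v wind components at the two locations
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
u = L @ rng.standard_normal((2, n))
v = L @ rng.standard_normal((2, n))

speed = np.hypot(u, v)           # Rayleigh marginals (Weibull with shape k = 2)
k_target = 2.5
w = speed ** (2.0 / k_target)    # power-law transform -> Weibull with shape k_target

corr = float(np.corrcoef(w[0], w[1])[0, 1])   # induced speed-speed dependence
print(float(speed[0].mean()), corr)
```

The dependence between the transformed speeds is inherited entirely from the component correlation, which is why the joint speed distribution carries more information than the marginal Weibull fits alone.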

  6. Noninvasive ultrasonic measurements of temperature distribution and heat fluxes in nuclear systems

    International Nuclear Information System (INIS)

    Jia, Yunlu; Skliar, Mikhail

    2015-01-01

    Measurements of temperature and heat fluxes through structural materials are important in many nuclear systems. One such example is dry storage casks (DSC) built to store highly radioactive materials, such as spent nuclear reactor fuel. The temperature inside the casks must be maintained within the allowable limits of the fuel assemblies and the DSC components, because many degradation mechanisms are thermally controlled. In order to obtain direct, real-time measurements of the temperature distribution without inserting sensing elements into the harsh environment of storage casks, we are developing noninvasive ultrasound (US) methods for measuring the spatial distribution of temperature inside solid materials, such as concrete overpacks, steel casings, thimbles, and rods. The measured temperature distribution can then be used to obtain heat fluxes that provide calorimetric characterisation of the fuel decay, the fuel distribution inside the cask, its integrity, and accounting of nuclear materials. The physical basis of the proposed approach is the temperature dependence of the speed of sound in solids. By measuring the time it takes an ultrasound signal to travel a known distance between a transducer and a receiver, information about the temperature distribution along the path of the ultrasound propagation may be obtained. However, when the temperature along the path of US propagation is non-uniform, the overall time of flight of an ultrasound signal depends on the temperature distribution in a complex and unknown way. To overcome this difficulty, the central idea of our method is to create a US propagation path inside the material of interest which incorporates partial ultrasound reflectors (back scatterers) at known locations, and to use the train of created multiple echoes to estimate the temperature distribution. In this paper, we discuss experimental validation of this approach, the achievable accuracy and spatial resolution of the measured temperature profile, and stress the
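    The inversion step described above can be sketched as follows. This is an illustration, not the authors' code: the linear sound-speed law and all numerical values are assumptions.

```python
def segment_temperatures(echo_times, positions, c0, b, t_ref):
    # Invert per-segment times of flight between partial reflectors at known
    # `positions` into segment temperatures, assuming a linear temperature
    # dependence of the speed of sound: c(T) = c0 - b * (T - t_ref).
    temps = []
    for i in range(1, len(positions)):
        dx = positions[i] - positions[i - 1]
        dt = (echo_times[i] - echo_times[i - 1]) / 2.0  # round trip -> one way
        c = dx / dt                                     # mean speed in segment
        temps.append(t_ref + (c0 - c) / b)
    return temps
```

Each echo-to-echo time difference isolates one segment between reflectors, so a non-uniform temperature profile is recovered segment by segment instead of being averaged over the whole path.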

  7. Spatial distribution of potential near surface moisture flux at Yucca Mountain

    International Nuclear Information System (INIS)

    Flint, A.L.; Flint, L.E.

    1994-01-01

    An estimate of the areal distribution of present-day surface liquid moisture flux at Yucca Mountain was made using field measured water contents and laboratory measured rock properties. Using available data for physical and hydrologic properties (porosity, saturated hydraulic conductivity, moisture retention functions) of the volcanic rocks, surface lithologic units that are hydrologically similar were delineated. Moisture retention and relative permeability functions were assigned to each surface unit based on the similarity of the mean porosity and saturated hydraulic conductivity of the surface unit to laboratory samples of the same lithology. The potential flux into the mountain was estimated for each surface hydrologic unit using the mean saturated hydraulic conductivity for each unit and assuming all matrix flow. Using measured moisture profiles for each of the surface units, estimates were made of the depth at which seasonal fluctuations diminish and steady state downward flux conditions are likely to exist. The hydrologic properties at that depth were used with the current relative saturation of the tuff, to estimate flux as the unsaturated hydraulic conductivity. This method assumes a unit gradient. The range in estimated flux was 0.02 mm/yr for the welded Tiva Canyon to 13.4 mm/yr for the nonwelded Paintbrush Tuff. The areally averaged flux was 1.4 mm/yr. The major zones of high flux occur to the north of the potential repository boundary where the nonwelded tuffs are exposed in the major drainages
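    The unit-gradient flux estimate reduces to evaluating the unsaturated hydraulic conductivity at the measured relative saturation. A minimal sketch follows; the power-law relative permeability and its exponent are illustrative assumptions, not the property model used in the study.

```python
def unit_gradient_flux(k_sat, eff_saturation, exponent=3.0):
    # Under the unit-gradient assumption the downward flux equals the
    # unsaturated hydraulic conductivity: q = Ksat * kr(Se).  The power-law
    # relative permeability kr = Se**exponent is an assumption for
    # illustration only; k_sat and the returned flux share the same units.
    return k_sat * eff_saturation ** exponent
```

At full saturation the flux equals the saturated conductivity, and it falls off steeply as the rock dries, which is why the nonwelded (more conductive, wetter) tuffs dominate the areal average.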

  9. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anurag; Kumar, Bimlesh, E-mail: anurag.sharma@iitg.ac.in, E-mail: bimk@iitg.ac.in [Department of Civil Engineering, Indian Institute of Technology Guwahati, 781039 (India)

    2017-02-15

    The present experimental study examines the probability distribution functions (PDFs) of turbulent flow characteristics in the near-bed and away-from-bed regions for both no-seepage and seepage flows. Laboratory experiments were conducted on a plane sand bed for the no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimentally determined PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events are compared with the theoretical expressions obtained from a Gram–Charlier (GC)-based exponential distribution. Experimental observations follow the computed PDF distributions for both no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) method is used to measure the similarity between theoretical and experimental PDFs. The JSD for the PDFs of velocity fluctuations lies between 0.0005 and 0.003, while the JSD for the PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDFs of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of the inward and outward interactions, which may be due to weaker events. The JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulent intensity is also developed in the present study; it agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description in the areas of turbulent flow either at a single or finite number of points
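    The similarity measure used above can be made concrete. Below is a minimal implementation of the Jensen-Shannon divergence for discrete PDFs; binning of the experimental PDFs onto a common support is assumed, and base-2 logarithms bound the value by 1.

```python
import math

def jensen_shannon(p, q):
    # Jensen-Shannon divergence between two discrete PDFs on the same
    # support; symmetric, and with base-2 logarithms the value lies in [0, 1]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(ai + bi) / 2.0 for ai, bi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical PDFs give 0, disjoint PDFs give 1, and the small values reported in the abstract (0.0001 to 0.032) indicate close agreement between the measured and the GC-based theoretical PDFs.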

  10. Measurement of two-dimensional thermal neutron flux in a water phantom and evaluation of dose distribution characteristics

    International Nuclear Information System (INIS)

    Yamamoto, Kazuyoshi; Kumada, Hiroaki; Kishi, Toshiaki; Torii, Yoshiya; Horiguchi, Yoji

    2001-03-01

    To evaluate the nitrogen dose, boron dose and the gamma-ray dose arising from neutron capture reactions with hydrogen during medical irradiation, the two-dimensional distribution of the thermal neutron flux is very important, because these doses are proportional to the thermal neutron distribution. This report describes the measurement of the two-dimensional thermal neutron distribution in a head water phantom irradiated by neutron beams of the JRR-4, and the evaluation of the dose distribution characteristics. The thermal neutron flux in the phantom was measured with gold wires placed spokewise at every 30 degrees in order to avoid mutual interaction. The distribution of the thermal neutron flux was also calculated using a two-dimensional Lagrange interpolation program (radius and angle directions) developed for this work. As a result of the analysis, the distribution was confirmed to be distorted, with an annular peak outside the void, although an improved dose profile in the depth direction was confirmed when the radiation field in the phantom contains a void. (author)
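    The interpolation step can be illustrated with a one-dimensional Lagrange routine; applying it successively along the radius and angle coordinates gives a two-dimensional (radius, angle) interpolation of the measured flux. This sketch is generic, not the program developed in the report.

```python
def lagrange_interp(xs, ys, x):
    # evaluate the Lagrange interpolating polynomial through the points
    # (xs, ys) at x; exact for polynomials of degree < len(xs)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total
```

For flux values measured along a gold wire at a few radial positions, this reconstructs the flux at intermediate radii; repeating over the 30-degree spokes fills in the angular direction.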

  11. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid Sepehrband

    2016-05-01

    Full Text Available Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters that are obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has been consistently shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with their own distribution profiles. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, log normal, log logistic and Birnbaum-Saunders distributions.
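    The model-comparison logic (fit each candidate distribution by maximum likelihood, then rank the fits by an information criterion) can be sketched with two analytically tractable candidates. The snippet is illustrative: the "diameters" are synthetic log-normal draws rather than electron-microscopy data, and only the normal and log-normal families are compared.

```python
import math
import random

def normal_loglik(x):
    # maximized Gaussian log-likelihood (MLE mean and variance plugged in)
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def lognormal_loglik(x):
    # maximized log-normal log-likelihood: Gaussian fit on the logs,
    # corrected by the Jacobian of the log transform
    logs = [math.log(v) for v in x]
    return normal_loglik(logs) - sum(logs)

def aic(loglik, n_params=2):
    # Akaike information criterion: lower is better
    return 2 * n_params - 2 * loglik

# synthetic right-skewed "axon diameters" (log-normal, hypothetical units)
rng = random.Random(1)
diam = [math.exp(rng.gauss(0.0, 0.5)) for _ in range(5000)]
```

On right-skewed data the skewed candidate wins the AIC comparison, mirroring how the study ranks the gamma, generalized extreme value and other families against the empirical distribution.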

  12. Impact of high-flux haemodialysis on the probability of target attainment for oral amoxicillin/clavulanic acid combination therapy.

    Science.gov (United States)

    Hui, Katrina; Patel, Kashyap; Kong, David C M; Kirkpatrick, Carl M J

    2017-07-01

    Clearance of small molecules such as amoxicillin and clavulanic acid is expected to increase during high-flux haemodialysis, which may result in lower concentrations and thus reduced efficacy. To date, clearance of amoxicillin/clavulanic acid (AMC) during high-flux haemodialysis remains largely unexplored. Using published pharmacokinetic parameters, a two-compartment model with first-order input was simulated to investigate the impact of high-flux haemodialysis on the probability of target attainment (PTA) of orally administered AMC combination therapy. The following pharmacokinetic/pharmacodynamic targets were used to calculate the PTA. For amoxicillin, the target was a free concentration above the minimum inhibitory concentration (MIC) for ≥50% of the dosing period (≥50% ƒT>MIC). For clavulanic acid, the target was a free concentration >0.1 mg/L for ≥45% of the dosing period (≥45% ƒT>0.1 mg/L). Dialysis clearance reported in low-flux haemodialysis for both compounds was doubled to represent the likely clearance during high-flux haemodialysis. Monte Carlo simulations were performed to produce concentration-time profiles over 10 days in 1000 virtual patients. Seven different regimens commonly seen in clinical practice were explored. When AMC was dosed twice daily, the PTA was mostly ≥90% for both compounds regardless of when haemodialysis commenced. When administered once daily, the PTA was 20-30% for clavulanic acid and ≥90% for amoxicillin. The simulations suggest that once-daily orally administered AMC in patients receiving high-flux haemodialysis may result in insufficient concentrations of clavulanic acid to effectively treat infections, especially on days when haemodialysis occurs. Copyright © 2017 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.
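    A PTA calculation of this kind can be sketched as follows. This is a simplification, not the published model: a one-compartment oral model stands in for the two-compartment model, and all pharmacokinetic values (dose, ka, CL, V, fu, between-patient variability) are invented for illustration.

```python
import math
import random

def f_t_above_mic(dose, ka, cl, v, mic, tau=12.0, fu=0.8, steps=1000):
    # fraction of a dosing interval with free concentration above the MIC
    # for a single oral dose in a one-compartment model (a simplification
    # of the paper's two-compartment model); all parameters are assumed
    ke = cl / v
    above = 0
    for i in range(steps):
        t = tau * i / steps
        c = (dose * ka / (v * (ka - ke))) * (math.exp(-ke * t) - math.exp(-ka * t))
        if fu * c > mic:
            above += 1
    return above / steps

def pta(n_patients, target=0.5, seed=0):
    # PTA: fraction of virtual patients meeting the fT>MIC target
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_patients):
        cl = 12.0 * math.exp(rng.gauss(0, 0.3))  # hypothetical variability
        if f_t_above_mic(500.0, 1.0, cl, 15.0, mic=2.0) >= target:
            hits += 1
    return hits / n_patients
```

Drawing a clearance per virtual patient and counting target attainment is the Monte Carlo step; adding a time-varying dialysis clearance on haemodialysis days is what the published simulations layer on top.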

  13. Universal Probability Distribution Function for Bursty Transport in Plasma Turbulence

    International Nuclear Information System (INIS)

    Sandberg, I.; Benkadda, S.; Garbet, X.; Ropokis, G.; Hizanidis, K.; Castillo-Negrete, D. del

    2009-01-01

    Bursty transport phenomena associated with convective motion present universal statistical characteristics across different physical systems. In this Letter, a stochastic univariate model and the associated probability distribution function for the description of bursty transport in plasma turbulence are presented. The proposed stochastic process recovers the universal distribution of density fluctuations observed in the plasma edge of several magnetic confinement devices, as well as the remarkable scaling between their skewness S and kurtosis K. Similar statistical characteristics have also been observed in other physical systems characterized by convection, such as the x-ray fluctuations emitted by the Cygnus X-1 accretion disc plasmas and sea surface temperature fluctuations.
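    A skewness-kurtosis scaling of this type can be illustrated with a simple positive-definite fluctuation model. For a Gamma-distributed variable (an assumption chosen here for tractability, not the Letter's stochastic process) the relation is exactly parabolic, K = 3 + (3/2)S².

```python
import math

def gamma_skew_kurt(k):
    # exact skewness S and kurtosis K of a Gamma(k) random variable; the
    # Gamma family is an illustrative stand-in for positive-definite
    # fluctuations, not the specific process proposed in the Letter
    return 2.0 / math.sqrt(k), 3.0 + 6.0 / k
```

As the shape parameter k varies, (S, K) traces a parabola, which is the kind of one-parameter S-K curve reported for edge density fluctuations.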

  14. Spatial distribution of neutron flux for the A-711 neutron generator

    International Nuclear Information System (INIS)

    Essiet, A. E.; Owolabi, S. A.; Adesanmi, C. A.; Balogun, F. A.

    1996-01-01

    The spatial distribution of neutron flux for the Kaman Sciences A-711 neutron generator recently installed at the Centre for Energy Research and Development (CERD), Ile-Ife, Nigeria has been determined. At an operational tube current of 2.0 mA and a high voltage power supply (HVPS) of 158 kV, the neutron flux increases from 1.608 ± 0.021 × 10⁸ n/cm²s at the top of the irradiated plastic vial to 2.640 ± 0.022 × 10⁸ n/cm²s at the centre, and then decreases to 1.943 ± 0.02 × 10⁸ n/cm²s at the bottom. The flux density is strongly dependent on the diameter of the deuteron beam at the tritium target, and within this range a source strength of 10⁸ n/s has been measured for the A-711 neutron generator

  15. Flux pinning property in a single crystal NdBa2Cu3Oy superconductor

    International Nuclear Information System (INIS)

    Hasan, M.N.; Kurokawa, T.; Kiuchi, M.; Otabe, E.S.; Matsushita, T.; Chikumoto, N.; Machi, T.; Muralidhar, M.; Murakami, M.

    2005-01-01

    The critical current density Jc and the apparent pinning potential U0* in a single-crystal NdBa2Cu3Oy superconductor which shows a broad peak effect are investigated by measuring the DC magnetization and its relaxation. The field-induced pinning mechanism does not explain the temperature dependence of the peak field Bp and the dip field Bd. The experimental results for Jc and U0* are compared with a theoretical analysis based on the flux creep-flow model, taking the distribution of the flux pinning strength into account. The number of flux lines in the flux bundle (g2), the most probable value of the pinning strength (Am) and the distribution width (σ2) are determined so that a good fit is obtained between the experimental and theoretical results. The behavior of these parameters is discussed in correspondence with the disorder transition of the flux lines

  16. Non-Gaussian probability distributions of solar wind fluctuations

    Directory of Open Access Journals (Sweden)

    E. Marsch

    Full Text Available The probability distributions of field differences ∆x(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lag τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scale τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
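    The increment analysis can be reproduced on a synthetic intermittent signal. The snippet is illustrative (a Gaussian random walk whose step amplitude switches slowly, not Helios data): it computes the increments ∆x(τ) and shows the flatness (kurtosis) growing at small lags, the signature of spikier-than-Gaussian distributions.

```python
import random

def increments(x, lag):
    # field differences dx(tau) = x(t + tau) - x(t)
    return [x[i + lag] - x[i] for i in range(len(x) - lag)]

def kurtosis(v):
    # flatness m4 / m2**2; equals 3 for a Gaussian
    n = len(v)
    m = sum(v) / n
    m2 = sum((u - m) ** 2 for u in v) / n
    m4 = sum((u - m) ** 4 for u in v) / n
    return m4 / (m2 * m2)

# synthetic intermittent signal: Gaussian steps whose amplitude switches
# slowly between two levels (volatility clustering)
rng = random.Random(7)
x, s = [0.0], 1.0
for i in range(1, 40001):
    if i % 500 == 0:
        s = 0.1 if s == 1.0 else 1.0
    x.append(x[-1] + rng.gauss(0.0, s))
```

Small-lag increments sample a mixture of amplitudes and are heavy-tailed, while long-lag increments sum many steps and relax toward a Gaussian, just as the Helios distributions do from minutes to a day.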

  17. Calculation of the magnetic flux density distribution in type-II superconductors with finite thickness and well-defined geometry

    International Nuclear Information System (INIS)

    Forkl, A.; Kronmueller, H.

    1995-01-01

    The distribution of the critical current density jc(r) in hard type-II superconductors depends strongly on the sample geometry. Rules are given for the construction of jc(r). Samples with homogeneous thickness are divided into cake-like regions with a unique current direction. The spatial magnetic flux density distribution and the magnetic polarization of such a cake-like unit cell with homogeneous current density are calculated analytically. The magnetic polarization and magnetic flux density distribution of a superconductor in the mixed state are then given by an adequate superposition of the unit-cell solutions. The theoretical results show good agreement with magneto-optically determined magnetic flux density distributions of a quadratic thin superconducting YBa2Cu3O7-x film. The current density distribution is discussed for several sample geometries

  18. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    Science.gov (United States)

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
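    The relation C = Np is easy to demonstrate by simulation. The sketch below (with hypothetical N and p) shows that raw counts are biased low relative to the true population size, and that dividing by the detection probability removes the bias on average.

```python
import random

def simulate_counts(n_true, p_detect, n_surveys, seed=0):
    # each of the N animals present is detected independently with
    # probability p, so a count C is a Binomial(N, p) draw with E[C] = N * p
    rng = random.Random(seed)
    return [sum(rng.random() < p_detect for _ in range(n_true))
            for _ in range(n_surveys)]
```

If p also varies between surveys or over years, trends in unadjusted counts confound changes in N with changes in p, which is the core of the argument above.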

  19. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Contents: Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence. Discrete Distributions: Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation. Continuous Distributions: The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

  20. Study of the neutron flux distribution in a cylindrical reactor

    Directory of Open Access Journals (Sweden)

    A. Vidal-Ferràndiz

    2017-08-01

    Full Text Available In the Energy Engineering Degree of the Universitat Politècnica de València, students attend the Nuclear Technology course, in which the basic knowledge of this technology is presented. A main objective is to obtain the neutron population distribution inside a reactor core, in order to maintain the fission chain reaction. As this activity cannot be developed experimentally, mathematical modelling is of great importance to achieve this objective. One of the proposed computer laboratories consists of determining the neutron flux analytically and numerically in a cylindrical geometry. The analytical solution makes use of Bessel functions and is a good example of their applications. Alternatively, a numerical solution based on finite differences is used to obtain an approximate solution for the neutron flux. In this work, different discretizations of the cylindrical geometry are implemented and their results are compared.
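    The numerical part of such a laboratory can be sketched as a finite-difference eigenvalue problem: for a bare homogeneous cylinder the one-group flux satisfies (1/r)(r φ')' + B²φ = 0 with φ(R) = 0, whose analytical solution is φ(r) = J0(2.405 r/R). The snippet below is a sketch under these textbook assumptions (not the course material): it discretizes the operator on cell centres and recovers the fundamental buckling B² by inverse power iteration, so that √(B²)·R ≈ 2.405, the first zero of J0.

```python
import math

def thomas(a, b, c, d):
    # tridiagonal solve: a sub-, b main, c super-diagonal, d right-hand side
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def fundamental_buckling(radius=1.0, n=200, iters=60):
    # finite-difference discretization of -(1/r) d/dr (r dphi/dr) on cell
    # centres r_i = (i + 1/2) h, with symmetry at r = 0 and phi = 0 at r = R;
    # inverse power iteration converges to the smallest eigenvalue B^2
    h = radius / n
    a, b, c = [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(n):
        r = (i + 0.5) * h
        rm, rp = i * h, (i + 1) * h          # cell-face radii
        a[i] = -rm / (r * h * h)             # vanishes at i = 0 (symmetry)
        b[i] = (rm + rp) / (r * h * h)
        c[i] = -rp / (r * h * h)
    c[-1] = 0.0                              # Dirichlet ghost value dropped
    phi = [1.0] * n
    for _ in range(iters):
        phi = thomas(a, b, c, phi)           # one inverse-iteration step
        norm = max(abs(v) for v in phi)
        phi = [v / norm for v in phi]
    # Rayleigh quotient (phi, A phi) / (phi, phi) estimates the eigenvalue
    aphi = [(a[i] * phi[i - 1] if i > 0 else 0.0) + b[i] * phi[i]
            + (c[i] * phi[i + 1] if i < n - 1 else 0.0) for i in range(n)]
    return sum(p * q for p, q in zip(phi, aphi)) / sum(p * p for p in phi)
```

Comparing the computed buckling (and the iterated flux shape) against the Bessel-function solution is exactly the analytical-versus-numerical comparison the laboratory proposes.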

  1. Extreme points of the convex set of joint probability distributions with ...

    Indian Academy of Sciences (India)

    Here we address the following problem: If G is a standard ... convex set of all joint probability distributions on the product Borel space (X1 × X2, F1 ⊗ F2) which .... cannot be identically zero when X and Y vary in A1 and u and v vary in H2. Thus.

  2. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' ≤ t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  3. Phase transitions and flux distributions of SU(2) lattice gauge theory

    International Nuclear Information System (INIS)

    Peng, Yingcai.

    1993-01-01

    The strong interactions between quarks are believed to be described by Quantum Chromodynamics (QCD), which is a non-abelian SU(3) gauge theory. It is known that QCD undergoes a deconfining phase transition at very high temperatures: at low temperatures QCD is in a confined phase, while at sufficiently high temperatures it is in an unconfined phase. Quark confinement is believed to be due to string formation. In this dissertation the authors studied SU(2) gauge theory using numerical methods of LGT, which provides some insight into the properties of QCD because SU(2) is similar to SU(3). They measured the flux distributions of a quark-antiquark (q̄q) pair at various temperatures in different volumes. They find that in the limit of infinite volumes the flux distribution is different in the two phases: in the confined phase strong evidence is found for string formation, whereas in the unconfined phase there is no string formation. On the other hand, in the limit of zero temperature and finite volumes they find a clear signal for string formation in the large-volume region; however, the string tension measured in intermediate volumes is due to finite-volume effects, and there is no intrinsic string formation. The color flux energies (action) of the q̄q pair are described by Michael sum rules. The original Michael sum rules deal with a static q̄q pair at zero temperature in infinite volumes. To check these sum rules against flux data at finite temperatures, they present a complete derivation of the sum rules, thus generalizing them to account for finite-temperature effects. They find that the flux data are consistent with the prediction of the generalized sum rules. The study elucidates the rich structures of QCD, and provides evidence for quark confinement and string formation. This supports the belief that QCD is a correct theory for strong interactions, and that quark confinement can be explained by QCD

  4. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) remains accurate at lower densities (but not too low density) for the two potentials, but with a smaller value of the constant A than that predicted by the DFT theory.

  5. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Full Text Available Flood design estimates are key to sizing new water works and to reviewing the hydrological security of existing ones. The most reliable method for estimating their magnitudes associated with certain return periods is to fit a probabilistic model to available records of maximum annual flows. Since such a model is initially unknown, several models need to be tested in order to select the most appropriate one according to an arbitrary statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records and, therefore, their application has been established as a norm or precept. The Johnson system has three families of distributions, one of which is the Log-Normal model with three fit parameters, which is also the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained with such a model. Having contrasted the three probability distributions established by precept against 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared to the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models in the low return periods ( 1000 years. Because of its theoretical support, the SJU model is recommended in flood estimation.
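    Once the four SJU parameters are fitted, design floods follow from the quantile function, which only needs the standard normal quantile because the system converges to the standard normal distribution. A sketch follows; the fitted parameter values used below are hypothetical, not from the 31 records.

```python
import math
from statistics import NormalDist

def johnson_su_quantile(p, gamma, delta, xi, lam):
    # SJU quantile: x = xi + lam * sinh((z_p - gamma) / delta), where z_p is
    # the standard normal quantile of non-exceedance probability p
    z = NormalDist().inv_cdf(p)
    return xi + lam * math.sinh((z - gamma) / delta)

def design_flood(return_period_years, gamma, delta, xi, lam):
    # design flood with annual non-exceedance probability 1 - 1/Tr
    p = 1.0 - 1.0 / return_period_years
    return johnson_su_quantile(p, gamma, delta, xi, lam)
```

The unbounded upper tail of the SJU model is what distinguishes it from the bounded Johnson families when extrapolating to long return periods.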

  6. Joint probability distributions and fluctuation theorems

    International Nuclear Information System (INIS)

    García-García, Reinaldo; Kolton, Alejandro B; Domínguez, Daniel; Lecomte, Vivien

    2012-01-01

    We derive various exact results for Markovian systems that spontaneously relax to a non-equilibrium steady state by using joint probability distribution symmetries of different entropy production decompositions. The analytical approach is applied to diverse problems such as the description of the fluctuations induced by experimental errors, for unveiling symmetries of correlation functions appearing in fluctuation–dissipation relations recently generalized to non-equilibrium steady states, and also for mapping averages between different trajectory-based dynamical ensembles. Many known fluctuation theorems arise as special instances of our approach for particular twofold decompositions of the total entropy production. As a complement, we also briefly review and synthesize the variety of fluctuation theorems applying to stochastic dynamics of both continuous systems described by a Langevin dynamics and discrete systems obeying a Markov dynamics, emphasizing how these results emerge from distinct symmetries of the dynamical entropy of the trajectory followed by the system. For Langevin dynamics, we embed the 'dual dynamics' with a physical meaning, and for Markov systems we show how the fluctuation theorems translate into symmetries of modified evolution operators

  7. THREE-MOMENT BASED APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING SYSTEMS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2014-03-01

    Full Text Available The paper deals with the problem of approximation of probability distributions of random variables defined in positive area of real numbers with coefficient of variation different from unity. While using queueing systems as models for computer networks, calculation of characteristics is usually performed at the level of expectation and variance. At the same time, one of the main characteristics of multimedia data transmission quality in computer networks is delay jitter. For jitter calculation the function of packets time delay distribution should be known. It is shown that changing the third moment of distribution of packets delay leads to jitter calculation difference in tens or hundreds of percent, with the same values of the first two moments – expectation value and delay variation coefficient. This means that delay distribution approximation for the calculation of jitter should be performed in accordance with the third moment of delay distribution. For random variables with coefficients of variation greater than unity, iterative approximation algorithm with hyper-exponential two-phase distribution based on three moments of approximated distribution is offered. It is shown that for random variables with coefficients of variation less than unity, the impact of the third moment of distribution becomes negligible, and for approximation of such distributions Erlang distribution with two first moments should be used. This approach gives the possibility to obtain upper bounds for relevant characteristics, particularly, the upper bound of delay jitter.
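    The sensitivity of jitter to the third moment is easy to demonstrate: two distributions can share the first two moments yet differ widely in their tail quantiles. The sketch below uses a two-point family with mean 1 and variance 1 (a toy construction for illustration, not the hyper-exponential approximation itself).

```python
import math

def two_point(p):
    # two-point distribution with mean 1 and variance 1; the third moment
    # varies with p while the first two moments stay fixed
    x1 = 1 - math.sqrt((1 - p) / p)
    x2 = 1 + math.sqrt(p / (1 - p))
    return [(x1, p), (x2, 1 - p)]

def moment(dist, k):
    # k-th raw moment of a discrete distribution [(value, prob), ...]
    return sum(prob * x ** k for x, prob in dist)

def quantile(dist, q):
    # smallest value whose cumulative probability reaches q
    acc = 0.0
    for x, prob in sorted(dist):
        acc += prob
        if acc >= q:
            return x
    return max(x for x, _ in dist)
```

With p = 0.5 versus p = 0.9 the mean and variance are identical, the third moments differ, and the 99th-percentile "delay" doubles: matching only two moments leaves the jitter-relevant tail unconstrained.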

  8. Development of a position-sensitive fission counter and measurement of neutron flux distributions

    International Nuclear Information System (INIS)

    Yamagishi, Hideshi; Soyama, Kazuhiko; Kakuta, Tsunemi

    2001-08-01

    A position-sensitive fission counter (PSFC) that operates in high neutron flux and high gamma-ray background, such as at the side of a power reactor vessel, has been developed. Neutron detection using the PSFC with a solenoid electrode is based on a delay-line method. A PSFC with an outer diameter of 25 mm and a sensitive length of 1000 mm was manufactured for performance investigation. The PSFC provided output current pulses sufficiently higher than the alpha noise, even though it has a solenoid electrode and a large electrode capacitance. The S/N ratio of the PSFC outputs proved to be higher than that of ordinary fission counters with a 200 mm sensitive length. A performance test measuring neutron flux distributions with a neutron measuring system based on the PSFC was carried out at the side of a graphite pile (W2.4 x H1.4 x L1.2 m) with two 370 GBq Am-Be neutron sources. It was confirmed that the neutron flux distribution was well measured with the system. (author)

  9. Quantile selection procedure and associated distribution of ratios of order statistics from a restricted family of probability distributions

    International Nuclear Information System (INIS)

    Gupta, S.S.; Panchapakesan, S.

    1975-01-01

    A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained and tables of the appropriate constants, percentiles of this statistic, are given in order to facilitate the use of the selection procedure

  10. The probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels in high temperature, high purity water

    International Nuclear Information System (INIS)

    Akashi, Masatsune; Kenjyo, Takao; Matsukura, Shinji; Kawamoto, Teruaki

    1984-01-01

    In order to discuss the probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels, a series of creviced bent beam (CBB) and uni-axial constant load tests were carried out in oxygenated high temperature, high purity water. The following conclusions were obtained: (1) The initiation process of intergranular stress corrosion cracking can be approximated by a Poisson stochastic process, based on the CBB test results. (2) The probability distribution of intergranular stress corrosion cracking life may consequently be approximated by the exponential probability distribution. (3) The experimental data could be fitted to the exponential probability distribution. (author)
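The Poisson-initiation argument above implies exponentially distributed lives, whose single rate parameter has a simple maximum likelihood estimate. A minimal sketch with made-up numbers (the rate and sample size are illustrative, not test data from the record):

```python
import math
import random

# If crack initiation is a Poisson process with rate lam, the time to
# first initiation is Exp(lam), and the MLE of lam from observed lives
# t_1..t_n is n / sum(t_i). All values below are hypothetical.
random.seed(42)
lam_true = 0.05   # initiations per hour (made-up value)
lives = [random.expovariate(lam_true) for _ in range(5000)]

lam_hat = len(lives) / sum(lives)             # MLE of the initiation rate
survival_100h = math.exp(-lam_hat * 100.0)    # P(no crack before 100 h)
```

The fitted survival function exp(−lam·t) is exactly the exponential life distribution that conclusions (2) and (3) refer to.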

  11. Characteristics of ion distribution functions in dipolarizing flux bundles: Event studies

    Science.gov (United States)

    Runov, A.; Angelopoulos, V.; Artemyev, A.; Birn, J.; Pritchett, P. L.; Zhou, X.-Z.

    2017-06-01

    Taking advantage of multipoint observations from a repeating configuration of the five Time History of Events and Macroscale Interactions during Substorms (THEMIS) probes separated by 1 to 2 Earth radii (RE) along X, Y, and Z in the geocentric solar magnetospheric system (GSM), we study ion distribution functions collected by the probes during three dipolarizing flux bundle (DFB) events observed at geocentric distances 9 energy and twice the thermal energy, although the distribution in the ambient plasma sheet was isotropic. The anisotropic ion distribution in DFBs injected toward the inner magnetosphere may provide the free energy for waves and instabilities, which are important elements of particle energization.

  12. Subchannel measurements of the equilibrium quality and mass flux distribution in a rod bundle

    International Nuclear Information System (INIS)

    Lahey, R.T. Jr.

    1986-01-01

    An experiment was performed to measure the equilibrium subchannel void and mass flux distribution in a simulated BWR rod bundle. These new equilibrium subchannel data are unique and represent an excellent basis for subchannel ''void drift'' model development and assessment. Equilibrium subchannel void and mass flux distributions have been determined from the data presented herein. While the form of these correlations agree with the results of previous theoretical investigations, they should be generalized with caution since the current data base has been taken at only one (low) system pressure. Clearly there is a need for equilibrium subchannel data at higher system pressures if mechanistic subchannel models are to be developed

  13. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  14. Extreme fluxes in solar energetic particle events: Methodological and physical limitations

    International Nuclear Information System (INIS)

    Miroshnichenko, L.I.; Nymmik, R.A.

    2014-01-01

    In this study, all available data on the largest solar proton events (SPEs), or extreme solar energetic particle (SEP) events, for the period from 1561 up to now are analyzed. Under consideration are the observational, methodological and physical problems of energy-spectrum presentation for SEP fluxes (fluences) near the Earth's orbit. Special attention is paid to the study of the distribution function for extreme fluences of SEPs by their sizes. The authors present advances in at least three aspects: 1) a form of the distribution function that was previously obtained from the data for three cycles of solar activity has been completely confirmed by the data for 41 solar cycles; 2) early estimates of extremely large fluences in the past have been critically revised, and their values were found to be overestimated; and 3) extremely large SEP fluxes are shown to obey a probabilistic distribution, so the concept of an “upper limit flux” does not carry any strict physical sense although it serves as an important empirical restriction. SEP fluxes may only be characterized by the relative probabilities of their appearance, and there is a sharp break in the spectrum in the range of large fluences (or low probabilities). It is emphasized that modern observational data and methods of investigation do not allow, for the present, the precise resolution of the problem of the spectrum break or the estimation of the maximum potentialities of solar accelerator(s). This limitation considerably restricts the extrapolation of the obtained results to the past and future for application to the epochs with different levels of solar activity. - Highlights: • All available data on the largest solar proton events (SPEs) are analyzed. • Distribution function obtained for 3 last cycles is confirmed for 41 solar cycles. • Estimates of extremely large fluences in the past are found to be overestimated. • Extremely large SEP fluxes are shown to obey a probabilistic distribution.

  15. On the issues of probability distribution of GPS carrier phase observations

    Science.gov (United States)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In common practice the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS
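A distribution analysis of the kind described here typically starts from the sample skewness and excess kurtosis of the residuals, which are near zero for Gaussian data. The sketch below applies this check to simulated residuals (the noise level, sample size and the Jarque-Bera-type statistic are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

# Simulated stand-in for double-difference phase residuals; real residuals
# affected by multipath or atmospheric delays typically show heavier tails.
rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 0.002, size=5000)   # metres, made-up noise level

z = (residuals - residuals.mean()) / residuals.std()
skew = np.mean(z ** 3)                 # ~0 for Gaussian data
excess_kurt = np.mean(z ** 4) - 3.0    # ~0 for Gaussian data
# Jarque-Bera-type statistic; large values indicate non-normality.
jb = len(z) / 6.0 * (skew ** 2 + excess_kurt ** 2 / 4.0)
```

For the Gaussian input above the statistic stays small; heavy-tailed residuals would push both the excess kurtosis and the combined statistic up sharply.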

  16. Using the probability method for multigroup calculations of reactor cells in a thermal energy range

    International Nuclear Information System (INIS)

    Rubin, I.E.; Pustoshilova, V.S.

    1984-01-01

    The possibility of using the transmission probability method with interpolation for determining the spatial-energy neutron flux distribution in cells of thermal heterogeneous reactors is considered. The results of multigroup calculations of several uranium-water plane and cylindrical cells with different fuel enrichment in the thermal energy range are given. High accuracy is obtained with low computer time consumption. The use of the transmission probability method is particularly reasonable in programs for computers with a significant reserve of internal memory

  17. Neutron flux distribution measurement in the elementary cell of the RB reactor

    Energy Technology Data Exchange (ETDEWEB)

    Takac, S [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)

    1963-04-15

    The distribution of thermal neutrons was measured in the elementary cell with dysprosium foils for different lattice pitches, and the results obtained are given. From the measured distributions, average fluxes were determined for every single zone. By using published data on effective absorption cross sections, thermal utilization factors were calculated and their changes given as functions of the lattice pitch: l = 8.0 cm, 11.3 cm and 17.9 cm. (author)

  18. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    International Nuclear Information System (INIS)

    Xu, Xin-Ping; Ide, Yusuke

    2016-01-01

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.
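The limiting probability distribution discussed in this record can be approximated numerically as the time average of the walker's position distribution. Below is a minimal sketch of a discrete-time quantum walk on a cycle with a Hadamard coin and a moving shift; the coin choice, cycle size and step count are illustrative (the paper treats general coins and swapping shift operators analytically).

```python
import numpy as np

N, T = 8, 2000                                  # cycle size, number of steps
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard coin

psi = np.zeros((N, 2), dtype=complex)
psi[0, 0] = 1.0                                 # walker localized at node 0

avg = np.zeros(N)                               # time-averaged distribution
for _ in range(T):
    psi = psi @ H.T                             # apply the coin at each node
    # moving shift: coin state 0 steps left, coin state 1 steps right
    psi = np.stack([np.roll(psi[:, 0], -1), np.roll(psi[:, 1], 1)], axis=1)
    avg += (np.abs(psi) ** 2).sum(axis=1)
avg /= T
```

The array `avg` converges to the limiting distribution as T grows; for this coin and initial state it is generally non-uniform, which is the kind of asymmetry the exact solutions in the paper quantify.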

  19. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn [School of Physical Science and Technology, Soochow University, Suzhou 215006 (China); Ide, Yusuke [Department of Information Systems Creation, Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa, 221-8686 (Japan)

    2016-10-15

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.

  20. Examining barrier distributions and, in extension, energy derivative of probabilities for surrogate experiments

    International Nuclear Information System (INIS)

    Romain, P.; Duarte, H.; Morillon, B.

    2012-01-01

    The energy derivatives of probabilities are functions suited to a better understanding of certain mechanisms. Applied to compound nuclear reactions, they can bring information on fusion barrier distributions, as originally introduced, and also, as presented here, on fission barrier distributions and heights. By extension, they give access to the compound-nucleus spin-parity states preferentially populated through a given entrance channel at a given energy. (authors)

  1. Magneto-optical imaging of magnetic flux distribution in high-Tc superconductors

    International Nuclear Information System (INIS)

    Ueno, K.; Murakamia, H.; Kawayama, I.; Doda, Y.; Tonouchi, M.; Chikumoto, N.

    2004-01-01

    Prototype systems of home-made magneto-optical microscopes were fabricated, and preliminary studies were carried out using Bi2Sr2CaCu2O8+δ single crystals and a YBa2Cu3O7-δ superconductor vortex flow transistor. In the study using BSCCO crystals, we succeeded in observing magnetic flux penetration into a half-peeled thin flake region on the crystal surface, and it was found that the magnetic fluxes penetrate in a characteristic one-dimensional alignment almost along the crystal a-axis. On the other hand, in the study using the YBCO device, clear changes in the generated magnetic field distribution could be detected corresponding to the current direction

  2. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    Science.gov (United States)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
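The constrained least-squares step described above (a nonnegative estimate whose probabilities sum to at most one) amounts to a Euclidean projection onto a capped simplex. A minimal sketch of that projection follows; the histogram values are made up, and this is one standard way to realize the constraint, not necessarily the paper's exact algorithm.

```python
import numpy as np

def project_capped_simplex(v):
    """Euclidean projection of v onto {q : q >= 0, sum(q) <= 1}."""
    w = np.clip(v, 0.0, None)
    if w.sum() <= 1.0:
        return w                       # clipping alone is already feasible
    # Otherwise the sum constraint is active: project onto the probability
    # simplex (sum == 1) by the classical sort-and-threshold algorithm.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.clip(v - theta, 0.0, None)

# Least-squares estimate of the pmf closest to a (noisy) normalized
# histogram h, under the constraints stated in the abstract.
h = np.array([0.5, 0.8, -0.1])         # made-up noisy "histogram" values
q = project_capped_simplex(h)           # -> array([0.35, 0.65, 0.  ])
```

When the clipped histogram already sums to less than one (the low-photon-count case the abstract emphasizes), the projection leaves it unchanged.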

  3. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  4. A semi-analytical computation of the theoretical uncertainties of the solar neutrino flux

    DEFF Research Database (Denmark)

    Jorgensen, Andreas C. S.; Christensen-Dalsgaard, Jorgen

    2017-01-01

    We present a comparison between Monte Carlo simulations and a semi-analytical approach that reproduces the theoretical probability distribution functions of the solar neutrino fluxes, stemming from the pp, pep, hep, Be-7, B-8, N-13, O-15 and F-17 source reactions. We obtain good agreement between...

  5. 2-D temperature distribution and heat flux of PFC in 2011 KSTAR campaign

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Eunnam, E-mail: bang14@nfri.re.kr; Hong, Suk-Ho; Yu, Yaowei; Kim, Kyungmin; Kim, Hongtack; Kim, Hakkun; Lee, Kunsu; Yang, Hyunglyul

    2013-10-15

    Highlights: • The heat flux on PFC tiles for 12 s pulse duration and 630 kA plasma current is about 0.02 MW/m². • When the cryopump is operated, the heat flux at the central divertor is higher than without the cryopump. • The longer the H-mode duration, the higher the heat flux on the divertor. -- Abstract: KSTAR has reached a plasma current of up to 630 kA and a plasma duration of up to 12 s, and has achieved high confinement mode (H-mode) in the 2011 campaign. The heat flux on PFC tiles has been estimated from the temperature increase of the PFC since 2010. The heat flux on PFC tiles increases significantly with higher plasma current and longer pulse duration. The time-averaged heat flux of shots in the 2010 campaign (with 3 s pulse duration and I_p of 611 kA) is 0.01 MW/m², while that in the 2011 campaign (with 12 s pulse duration and I_p of 630 kA) is about 0.02 MW/m². The heat flux at the divertor is 1.4–2 times higher than that at the inboard limiter or passive stabilizer. With cryopump operation, the heat flux at the central divertor is higher than without the cryopump. The heat flux at the divertor is, of course, proportional to the duration of H-mode. Furthermore, a software tool which visualizes the 2D temperature distribution of PFC tiles and estimates the heat flux in real time has been developed.
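Estimating a time-averaged heat flux from the temperature increase of a tile reduces, in the simplest lumped approximation, to q = m·c·ΔT/(A·Δt). The sketch below uses this relation with entirely illustrative numbers (tile mass, wetted area and temperature rise are assumptions, not KSTAR values), chosen only to land near the 0.02 MW/m² order of magnitude quoted above.

```python
# Lumped-parameter sketch: a tile of mass m and specific heat c warms by
# dT over a shot of duration dt on plasma-wetted area A. All numbers are
# hypothetical, for illustration only.
m = 0.8        # tile mass [kg]
c = 710.0      # specific heat of graphite [J/(kg K)]
A = 0.01       # plasma-wetted area [m^2]
dT = 4.2       # temperature rise over the shot [K]
dt = 12.0      # pulse duration [s]

q = m * c * dT / (A * dt)   # time-averaged heat flux [W/m^2]
q_MW = q / 1e6              # -> about 0.02 MW/m^2 for these inputs
```

A real analysis would account for conduction into the cooling structure and radiation during the shot; the lumped formula only captures the stored-energy part.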

  6. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    Science.gov (United States)

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  7. A Bernstein-Von Mises Theorem for discrete probability distributions

    OpenAIRE

    Boucheron, S.; Gassiat, E.

    2008-01-01

    We investigate the asymptotic normality of the posterior distribution in the discrete setting, when model dimension increases with sample size. We consider a probability mass function θ0 on ℕ∖{0} and a sequence of truncation levels (k_n)_n satisfying k_n³ ≤ n · inf_{i ≤ k_n} θ0(i). Let θ̂ denote the maximum likelihood estimate of (θ0(i))_{i ≤ k_n} and let Δ_n(θ0) denote the k_n-dimensional vector whose i-th coordinate is defined by √n (θ̂_n(i) − θ0(i)) for 1 ≤ i ≤ k_n. We check that under mild ...

  8. MCNP and visualization of neutron flux and power distributions

    International Nuclear Information System (INIS)

    Snoj, L.; Lengar, I.; Zerovnik, G.; Ravnik, M.

    2009-01-01

    The visualization of neutron flux and power distributions in two nuclear reactors (a TRIGA-type research reactor and a typical PWR) and one thermonuclear reactor (tokamak type) is treated in the paper. The distributions are calculated with the MCNP computer code and presented using the Amira and Voxler software. The results, in the form of figures, are presented in the paper together with comments qualitatively explaining the figures. Most people remember a process better if they can visualize it; a visual representation of reactor and neutron transport parameters is therefore a convenient modern educational tool for (nuclear power plant) operators, nuclear engineers, students and specialists involved in reactor operation and design. Visualization is applicable not only in education and training, but also as a tool for core and irradiation planning. (authors)

  9. Characterizing single-molecule FRET dynamics with probability distribution analysis.

    Science.gov (United States)

    Santoso, Yusdi; Torella, Joseph P; Kapanidis, Achillefs N

    2010-07-12

    Probability distribution analysis (PDA) is a recently developed statistical tool for predicting the shapes of single-molecule fluorescence resonance energy transfer (smFRET) histograms, which allows the identification of single or multiple static molecular species within a single histogram. We used a generalized PDA method to predict the shapes of FRET histograms for molecules interconverting dynamically between multiple states. This method is tested on a series of model systems, including both static DNA fragments and dynamic DNA hairpins. By fitting the shape of this expected distribution to experimental data, the timescale of hairpin conformational fluctuations can be recovered, in good agreement with earlier published results obtained using different techniques. This method is also applied to studying the conformational fluctuations in the unliganded Klenow fragment (KF) of Escherichia coli DNA polymerase I, which allows both confirmation of the consistency of a simple, two-state kinetic model with the observed smFRET distribution of unliganded KF and extraction of a millisecond fluctuation timescale, in good agreement with rates reported elsewhere. We expect this method to be useful in extracting rates from processes exhibiting dynamic FRET, and in hypothesis-testing models of conformational dynamics against experimental data.
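The forward-simulation idea behind dynamic PDA can be sketched as follows: a molecule interconverts between two FRET states during each observation bin, and the bin's apparent FRET is the time-weighted average of the two state efficiencies. All rates, efficiencies and the bin width below are made-up values, and shot noise is omitted; this is an illustration of the histogram-shape mechanism, not the generalized PDA method of the paper.

```python
import random

random.seed(1)
k12, k21 = 2.0, 3.0     # interconversion rates [1/ms] (hypothetical)
E1, E2 = 0.8, 0.2       # FRET efficiencies of the two states (hypothetical)
T = 1.0                 # observation bin duration [ms]

def simulate_bin():
    """Apparent FRET of one bin for a two-state interconverting molecule."""
    # Start from the equilibrium occupancy p1 = k21 / (k12 + k21).
    state = 1 if random.random() < k21 / (k12 + k21) else 2
    t, t1 = 0.0, 0.0     # elapsed time, time spent in state 1
    while t < T:
        rate = k12 if state == 1 else k21
        stay = min(random.expovariate(rate), T - t)
        if state == 1:
            t1 += stay
        t += stay
        state = 3 - state  # toggle 1 <-> 2
    f = t1 / T
    return f * E1 + (1.0 - f) * E2

fret = [simulate_bin() for _ in range(20000)]
mean_fret = sum(fret) / len(fret)
```

When the rates are fast compared with the bin, the histogram of `fret` collapses towards a single averaged peak; when they are slow, it splits towards the two static values, which is exactly the shape information dynamic PDA fits to recover timescales.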

  10. FluxVisualizer, a Software to Visualize Fluxes through Metabolic Networks

    Directory of Open Access Journals (Sweden)

    Tim Daniel Rose

    2018-04-01

    Full Text Available FluxVisualizer (Version 1.0, 2017, freely available at https://fluxvisualizer.ibgc.cnrs.fr) is a software tool to visualize flux values on a scalable vector graphic (SVG) representation of a metabolic network by colouring or increasing the width of reaction arrows of the SVG file. FluxVisualizer does not aim to draw metabolic networks but to use the user's own SVG file, allowing users to keep their representation standards with a minimum of constraints. FluxVisualizer is especially suitable for small to medium size metabolic networks, where a visual representation of the fluxes makes sense. The flux distribution can either be an elementary flux mode (EFM), a flux balance analysis (FBA) result or any other flux distribution. It allows the automatic visualization of a series of pathways of the same network, as is needed for a set of EFMs. The software is coded in Python 3 and provides a graphical user interface (GUI) and an application programming interface (API). All functionalities of the program can be used from the API and the GUI, which allows advanced users to add their own functionalities. The software is able to work with various formats of flux distributions (Metatool, CellNetAnalyzer, COPASI and FAME export files) as well as with Excel files. This simple software can save a lot of time when evaluating flux simulations on a metabolic network.

  11. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions: Sample Space and Events; Probabilities; Counting Techniques. Independence and Conditional Probability: Independence; Conditioning; The Borel-Cantelli Theorem. Discrete Random Variables: Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation. Generating Functions, Branching Processes, Random Walk Revisited: Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk. Markov Chains: Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity. Continuous Random Variables: Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F...

  12. Fast Radio Bursts’ Recipes for the Distributions of Dispersion Measures, Flux Densities, and Fluences

    Science.gov (United States)

    Niino, Yuu

    2018-05-01

    We investigate how the statistical properties of the dispersion measure (DM) and apparent flux density/fluence of (nonrepeating) fast radio bursts (FRBs) are determined by the unknown cosmic rate density history [ρ_FRB(z)] and luminosity function (LF) of the transient events. We predict the distributions of DMs, flux densities, and fluences of FRBs taking account of the variation of the receiver efficiency within its beam, using analytical models of ρ_FRB(z) and the LF. Comparing the predictions with the observations, we show that the cumulative distribution of apparent fluences suggests that FRBs originate at cosmological distances and that ρ_FRB increases with redshift, resembling the cosmic star formation history (CSFH). We also show that an LF model with a bright-end cutoff at log10 Lν (erg s⁻¹ Hz⁻¹) ∼ 34 is favored to reproduce the observed DM distribution if ρ_FRB(z) ∝ CSFH, although the statistical significance of the constraints obtained with the current size of the observed sample is not high. Finally, we find that the correlation between DM and flux density of FRBs is potentially a powerful tool to distinguish whether FRBs are at cosmological distances or in the local universe more robustly with future observations.

  13. Measurement of neutron flux distribution by semiconductor detector

    International Nuclear Information System (INIS)

    Obradovic, D.; Bosevski, T.

    1964-01-01

    Application of semiconductor detectors for measuring neutron flux distribution is about 10 times faster than measurement by activation foils and demands significantly lower reactor power. The following corrections are avoided: the mass of the activation foils, which influences self-shielding; nuclear decay during activity measurements; and counter dead-time. It is possible to check the measured data during the experiment and repeat measurements if needed. The precision of the measurement is higher since the desired statistics can be chosen. The method described in this paper was applied for measurements at the RB reactor. It is concluded that the method is suitable for fast measurements, but activation analysis is still indispensable

  14. Ruin Probabilities and Aggregrate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.

  15. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
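The base process described above can be synthesized directly: one-sided exponential pulses with Poisson arrival times and exponentially distributed amplitudes, for which the stationary mean is γ·⟨A⟩ with intermittency parameter γ = τ_d/τ_w. The sketch below uses illustrative parameter values and is a brute-force realization, not the paper's estimation method.

```python
import numpy as np

rng = np.random.default_rng(7)
tau_d = 1.0        # pulse duration time (hypothetical)
tau_w = 0.5        # average pulse waiting time (hypothetical)
A_mean = 1.0       # mean pulse amplitude (hypothetical)
T, dt = 1000.0, 0.05

t = np.arange(0.0, T, dt)
K = rng.poisson(T / tau_w)                 # number of pulses in [0, T]
arrivals = rng.uniform(0.0, T, size=K)     # Poisson process arrival times
amps = rng.exponential(A_mean, size=K)     # exponential pulse amplitudes

signal = np.zeros_like(t)
for tk, ak in zip(arrivals, amps):
    mask = t >= tk                         # one-sided exponential pulse shape
    signal[mask] += ak * np.exp(-(t[mask] - tk) / tau_d)

gamma = tau_d / tau_w                      # expected mean: gamma * A_mean
```

With exponential amplitudes the realization is positive definite and positively skewed; replacing `amps` by, say, zero-mean Laplacian values gives the non-positive-definite signals the abstract argues require a characteristic-function-based fit.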

  16. Thermal neutron flux distribution in the ET-RR-1 reactor core as experimentally measured and theoretically calculated by the code Triton

    Energy Technology Data Exchange (ETDEWEB)

    Imam, M [National center for nuclear safety and radiation control, atomic energy authority, Cairo, (Egypt)

    1995-10-01

    Thermal neutron flux distributions that were measured earlier at the ET-RR-1 reactor are compared with those calculated by the three-dimensional diffusion code Triton. This comparison was made for the horizontal and vertical flux distributions. The horizontal thermal flux distributions considered in this comparison were along the core diagonals at two planes of different heights from the core bottom: one at a level passing through the control rod at the core center and the other at a level below this control rod. All the control rods were taken into consideration. The effect of the existence of a water cavity inside the core, as well as the influence of the control rods on the thermal flux, is illustrated in this work. The vertical thermal flux distributions considered in the comparison were at two positions in the core, one of them along the core height. The horizontal reactor power distributions along the core height and along the core diagonal as calculated by the code Triton are also given in this work. 8 figs., 1 tab.

  17. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to have recently been used in biometric and multimedia information retrieval systems. This technology is attained from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as a feature extraction method itself for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of the sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
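The idea of using a per-frame histogram estimate of the PDF directly as the feature vector can be sketched as follows. The frame length, bin count and the synthetic stand-in "voice" signal are all illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def frame_pdf_features(signal, frame_len=256, bins=16):
    """Histogram-based PDF estimate per frame, used directly as a feature vector."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    feats = []
    for fr in frames:
        hist, _ = np.histogram(fr, bins=bins, range=(-1.0, 1.0), density=True)
        feats.append(hist)
    return np.array(feats)

# Stand-in for a sampled voice signal: a tone plus a little noise.
sig = 0.5 * np.sin(2 * np.pi * 440 * np.arange(8000) / 8000) \
      + 0.05 * rng.standard_normal(8000)
F = frame_pdf_features(sig)   # one 16-bin PDF estimate per 256-sample frame
```

Each row of `F` is a density estimate, so it integrates to one over the amplitude range; comparing such rows across speakers is the gist of the proposed feature.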

  18. Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Huiwu Luo

    2015-01-01

    The computational procedure of hyperspectral image (HSI) analysis is extremely complex, not only due to the high-dimensional information, but also due to the highly correlated data structure. The need for effective processing and analysis of HSI has met many difficulties. Dimensionality reduction has proven to be a powerful tool for high-dimensional data analysis. Local Fisher's linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix, and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low-dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.

  19. Measurement of the neutron flux distributions, epithermal index, Westcott thermal neutron flux in the irradiation capsules of hydraulic conveyer (Hyd) and pneumatic tubes (Pn) facilities of the KUR

    International Nuclear Information System (INIS)

    Chatani, Hiroshi

    2001-05-01

    The reactions Au(n,γ)¹⁹⁸Au and Ti(n,p)⁴⁷,⁴⁸Sc were used for the measurements of the thermal and epithermal (thermal + epithermal) and the fast neutron flux distributions, respectively. In the case of Hyd (hydraulic conveyer), the thermal + epithermal and fast neutron flux distributions in the horizontal direction in the capsule are especially flat; the distortions of the fluxes are 0.6% and 5.4%, respectively. However, these neutron fluxes in the vertical direction are low at the top and high at the bottom of the capsule; the differences between top and bottom are 14% for both distributions. On the other hand, in polyethylene capsules of Pn-1, 2, 3 (pneumatic tubes Nos. 1, 2, 3), in contrast with Hyd, these neutron flux distributions in the horizontal direction have gradients of 8-18% per 2.5 cm diameter, and those on the vertical axis have a distortion of approximately 5%. The strength of the epithermal dE/E component relative to the neutron density including both thermal and epithermal neutrons, i.e., the epithermal index, for the hydraulic conveyer (Hyd) and pneumatic tube No. 2 (Pn-2), in which irradiation experiments can be carried out, was determined by the multiple-foil activation method using the reactions Au(n,γ)¹⁹⁸Au and Co(n,γ)⁶⁰⁽ᵐ⁺ᵍ⁾Co. The epithermal index observed in an aluminum capsule of Hyd is 0.034-0.04, and the Westcott thermal neutron flux is 1.2×10¹⁴ cm⁻² s⁻¹ at approximately 1 cm above the bottom. The epithermal index in a Pn-2 polyethylene capsule was measured not only by the multiple-foil activation method but also by the Cd-ratio method, in which the Au(n,γ)¹⁹⁸Au reaction in a cadmium cover is also used. The epithermal index is 0.045-0.055, and the thermal neutron flux is 1.8×10¹³ cm⁻² s⁻¹. (J.P.N.)

  20. PHOTOMETRIC REDSHIFT PROBABILITY DISTRIBUTIONS FOR GALAXIES IN THE SDSS DR8

    International Nuclear Information System (INIS)

    Sheldon, Erin S.; Cunha, Carlos E.; Mandelbaum, Rachel; Brinkmann, J.; Weaver, Benjamin A.

    2012-01-01

    We present redshift probability distributions for galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 8 imaging data. We used the nearest-neighbor weighting algorithm to derive the ensemble redshift distribution N(z), and individual redshift probability distributions P(z) for galaxies with r < 21.8 and u < 29.0. As part of this technique, we calculated weights for a set of training galaxies with known redshifts such that their density distribution in five-dimensional color-magnitude space was proportional to that of the photometry-only sample, producing a nearly fair sample in that space. We estimated the ensemble N(z) of the photometric sample by constructing a weighted histogram of the training-set redshifts. We derived P(z)'s for individual objects by using training-set objects from the local color-magnitude space around each photometric object. Using the P(z) for each galaxy can reduce the statistical error in measurements that depend on the redshifts of individual galaxies. The spectroscopic training sample is substantially larger than that used for the DR7 release. The newly added PRIMUS catalog is now the most important training set used in this analysis by a wide margin. We expect the primary sources of error in the N(z) reconstruction to be sample variance and spectroscopic failures: The training sets are drawn from relatively small volumes of space, and some samples have large incompleteness. Using simulations we estimated the uncertainty in N(z) due to sample variance at a given redshift to be ∼10%-15%. The uncertainty on calculations incorporating N(z) or P(z) depends on how they are used; we discuss the case of weak lensing measurements. The P(z) catalog is publicly available from the SDSS Web site.
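The ensemble N(z) estimate described here amounts to a weighted histogram of training-set redshifts. A toy sketch of that single step, with a stand-in redshift distribution and stand-in nearest-neighbor weights (neither drawn from the actual SDSS training sets):

```python
import numpy as np

rng = np.random.default_rng(4)

def ensemble_nz(train_z, weights, z_edges):
    """Weighted histogram of training-set redshifts as an estimate of N(z)."""
    hist, _ = np.histogram(train_z, bins=z_edges, weights=weights, density=True)
    return hist

# Toy training set: redshifts plus weights that up-weight objects
# under-represented relative to the photometric sample.
train_z = rng.gamma(2.0, 0.15, 5000)        # stand-in spectroscopic redshifts
weights = 1.0 + 0.5 * rng.random(5000)      # stand-in nearest-neighbor weights
z_edges = np.linspace(0.0, 2.0, 41)
nz = ensemble_nz(train_z, weights, z_edges)
```

Because `density=True`, the weighted histogram is normalized so that it integrates to one over the binned range, as an N(z) estimate should.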

  1. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  2. The limiting conditional probability distribution in a stochastic model of T cell repertoire maintenance.

    Science.gov (United States)

    Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen

    2010-04-01

    The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed. (c) 2009 Elsevier Inc. All rights reserved.
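The limiting conditional distribution of an absorbed birth-and-death chain can be approximated numerically by following the chain and conditioning on non-extinction. The sketch below uses a crude restart-on-extinction scheme and a logistic birth-death model with illustrative parameters; it is not the approximation scheme of the paper, only a way to see the LCD concentrate near the carrying capacity:

```python
import numpy as np

rng = np.random.default_rng(6)

def lcd_estimate(birth, death, n_max, start, steps=200_000):
    """Approximate the limiting conditional distribution (LCD) of a birth-death
    chain absorbed at 0 by following the embedded jump chain and restarting
    whenever extinction occurs (a stand-in for conditioning on survival)."""
    counts = np.zeros(n_max + 1)
    n = start
    for _ in range(steps):
        b, d = birth(n), death(n)
        n = n + 1 if rng.random() < b / (b + d) else n - 1
        if n == 0:
            n = start          # restart on extinction
            continue
        n = min(n, n_max)
        counts[n] += 1
    return counts[1:] / counts[1:].sum()

# Logistic birth-death process: eventual extinction is certain, yet before
# extinction the chain lingers near the carrying capacity n* = 100, where
# birth(n) = death(n): 1.0*n = 0.5*n + 0.005*n**2.
lcd = lcd_estimate(lambda n: 1.0 * n, lambda n: 0.5 * n + 0.005 * n * n,
                   n_max=200, start=100)
mean_n = np.arange(1, 201) @ lcd
```

The estimated distribution is concentrated around the deterministic carrying capacity, which is the qualitative behavior the LCD captures for persistent-before-extinction processes.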

  3. A brief introduction to probability.

    Science.gov (United States)

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.
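The normal distribution mentioned above is the standard worked example of a probability distribution; its cumulative probabilities can be computed with nothing beyond the standard-library error function (the textbook relation, not code from the paper):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal variable, via the error function:
    CDF(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability that a standard normal variable falls within one
# standard deviation of its mean: the familiar ~68% rule.
p_within_1sd = normal_cdf(1.0) - normal_cdf(-1.0)
```

`p_within_1sd` evaluates to approximately 0.6827, the usual "68-95-99.7" first step.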

  4. Numerical Analysis on Heat Flux Distribution through the Steel Liner of the Ex-vessel Core Catcher

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Se Hong; Choi, Choeng Ryul [ELSOLTEC, Yongin (Korea, Republic of); Kim, Byung Jo; Lee, Kyu Bok [KEPCO, Gimcheon (Korea, Republic of); Hwang, Do Hyun [KHNP-CRI, Daejeon (Korea, Republic of)

    2016-05-15

    In order to prevent material failure of the steel container of the core catcher system due to high temperatures, the heat flux through the steel liner wall must be kept below the critical heat flux (CHF), and vapor dry-out of the cooling channel must be avoided. In this study, a CFD methodology has been developed to simulate the thermal/hydraulic phenomena in the core catcher system, involving the following physical phenomena: natural convection in the corium pool, boiling heat transfer, and solidification/melting of the corium. A numerical analysis has been carried out to estimate the heat flux distribution through the steel liner of the core catcher. High heat flux values are found at the free surface of the corium pool; however, the heat flux through the steel liner remains below the critical heat flux.

  5. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    The theoretical literature has a rich characterization of scoring rules for eliciting the subjective beliefs that an individual has for continuous events, but under the restrictive assumption of risk neutrality. It is well known that risk aversion can dramatically affect the incentives to correctly report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this, one can "calibrate" inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk-averse agents have to distort reports. We characterize the comparable implications of the general case of a risk-averse agent when facing a popular scoring rule over continuous events, and find that these concerns do not apply with anything like the same force. For empirically plausible levels of risk aversion, one can...

  6. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle, and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Lévy processes, Gerber–Shiu functions and dependence.

  7. The effect of temperature and the control rod position on the spatial neutron flux distribution in the Syrian Miniature Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2007-01-01

    The effects of water and fuel temperature increases and of changes in the control rod position on the spatial neutron flux distribution in the Syrian Miniature Neutron Source Reactor (MNSR) are discussed. The cross sections of all the reactor components at different temperatures are generated using the WIMSD4 code. These group constants are then used in the CITATION code to calculate the spatial neutron flux distribution using four energy groups. This work shows that the water and fuel temperature increase during the reactor's daily operating time does not affect the spatial neutron flux distribution in the reactor. Changing the control rod position likewise does not affect the spatial neutron flux distribution except in the region around the control rod. This stability of the spatial neutron flux distribution, especially in the inner and outer irradiation sites, makes the MNSR a good tool for the neutron activation analysis (NAA) technique and for the production of radioisotopes with medium or short half-lives during the reactor's daily operating time. (author)

  8. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The discussion whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability for false positives if a different process type is assumed as was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
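The core object of the study, the distribution of coincidence counts between mutually independent renewal processes, can be estimated by simulation. The sketch below generates independent gamma renewal spike trains and counts near-coincident spikes; rates, gamma order, trial length and coincidence window are illustrative, and the Fano factor is the sample statistic FFc discussed above:

```python
import numpy as np

rng = np.random.default_rng(3)

def gamma_train(rate, order, t_max):
    """Spike times of a gamma renewal process (order > 1 gives more
    regular spiking than a Poisson process of the same rate)."""
    isi = rng.gamma(order, 1.0 / (rate * order), size=int(2 * rate * t_max) + 100)
    t = np.cumsum(isi)
    return t[t < t_max]

def n_coincidences(t1, t2, width):
    """Spikes in t1 with at least one partner in t2 within +/- width seconds."""
    idx = np.searchsorted(t2, t1)
    lo = np.clip(idx - 1, 0, len(t2) - 1)
    hi = np.clip(idx, 0, len(t2) - 1)
    return int(np.count_nonzero((np.abs(t1 - t2[lo]) <= width) |
                                (np.abs(t1 - t2[hi]) <= width)))

# Coincidence-count distribution across independent trials of two
# mutually independent gamma renewal processes.
counts = np.array([n_coincidences(gamma_train(20.0, 4.0, 10.0),
                                  gamma_train(20.0, 4.0, 10.0), 0.005)
                   for _ in range(200)])
fano = counts.var() / counts.mean()   # FFc: width of the count distribution
```

Repeating this with Poisson trains (order 1) or log-normal intervals, and comparing `fano`, reproduces the kind of autostructure dependence the abstract describes.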

  9. Probability Distribution and Projected Trends of Daily Precipitation in China

    Institute of Scientific and Technical Information of China (English)

    CAO Li-Ge; ZHONG Jun; SU Bu-Da; ZHAI Jian-Qing; Marco GEMMER

    2013-01-01

    Based on observed daily precipitation data from 540 stations and 3,839 gridded data points from the high-resolution regional climate model COSMO-Climate Limited-area Modeling (CCLM) for 1961–2000, the ability of CCLM to simulate daily precipitation in China is examined, and the variation of the daily precipitation distribution pattern is revealed. By applying probability distribution and extreme value theory to the projected daily precipitation (2011–2050) under the SRES A1B scenario with CCLM, trends of daily precipitation series and daily precipitation extremes are analyzed. Results show that except for the western Qinghai-Tibetan Plateau and South China, the distribution patterns of the kurtosis and skewness calculated from the simulated and observed series are consistent with each other; their spatial correlation coefficients are above 0.75. The CCLM can capture the distribution characteristics of daily precipitation over China well. It is projected that in some parts of the Jianghuai region, central-eastern Northeast China and Inner Mongolia, the kurtosis and skewness will increase significantly, and precipitation extremes will increase during 2011–2050. The projected increases of maximum daily rainfall and of the longest non-precipitation period during the flood season in the aforementioned regions also indicate increasing trends of droughts and floods in the next 40 years.
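The kurtosis and skewness compared between model output and observations above are simple sample moments of a daily series. A minimal sketch on a synthetic precipitation record (the dry-day fraction and gamma parameters are illustrative stand-ins, not fitted to any station):

```python
import numpy as np

rng = np.random.default_rng(2)

def skew_kurt(x):
    """Sample skewness and (non-excess) kurtosis of a series."""
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean()

# Stand-in 40-year daily precipitation record: ~70% dry days,
# gamma-distributed amounts on wet days.
n_days = 40 * 365
wet = rng.random(n_days) < 0.3
precip = np.where(wet, rng.gamma(0.8, 8.0, n_days), 0.0)

skewness, kurtosis = skew_kurt(precip)
```

For such a zero-inflated, right-skewed series both statistics land far above the Gaussian reference values (0 and 3), which is why their spatial patterns are a sensitive check on a climate model's daily precipitation.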

  10. Equilibrium quality and mass flux distributions in an adiabatic three-subchannel test section

    International Nuclear Information System (INIS)

    Yadigaroglu, G.; Maganas, A.

    1993-01-01

    An experiment was designed to measure the fully developed quality and mass flux distributions in an adiabatic three-subchannel test section. The three subchannels had the geometrical characteristics of the corner, side, and interior subchannels of a BWR-5 rod bundle. Data collected with Refrigerant-114 at pressures ranging from 7 to 14 bar, simulating operation with water in the range of 55 to 103 bar, are reported. The average mass flux and quality in the test section were in the ranges 1300 to 1750 kg/m²s and -0.03 to 0.25, respectively. The data are analyzed and presented in various forms.

  11. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  12. Measured and Predicted Neutron Flux Distributions in a Material Surrounding a Cylindrical Duct

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, J; Sandlin, R

    1966-03-15

    The radial fast-neutron flux attenuation in the material (iron) surrounding ducts of diameters 7, 9, and 15 cm and total duct length of about 1.5 m has been investigated with and without neutron-scattering cans filled with D₂O in the duct. Experimentally the problem was solved by the use of foil activation techniques. Theoretically it was attacked, in the first place, by a Monte Carlo program specially written for this purpose and utilizing an importance sampling technique. In the second place, non- and single-scattering removal flux codes were tried, as well as simple hand calculations. The Monte Carlo results accounted well for the fast flux attenuation, while the non- and single-scattering methods overestimated the attenuation, generally by a factor of 10 or less. Simple hand calculations using three empirical parameters could be fitted to the measured data within a factor of 1.2–1.3 at penetration depths greater than 3–4 cm. The distribution of the D₂O-scattered flux could be well described in terms of single scattering.

  13. PHOTOMETRIC REDSHIFTS AND QUASAR PROBABILITIES FROM A SINGLE, DATA-DRIVEN GENERATIVE MODEL

    International Nuclear Information System (INIS)

    Bovy, Jo; Hogg, David W.; Weaver, Benjamin A.; Myers, Adam D.; Hennawi, Joseph F.; McMahon, Richard G.; Schiminovich, David; Sheldon, Erin S.; Brinkmann, Jon; Schneider, Donald P.

    2012-01-01

    We describe a technique for simultaneously classifying and estimating the redshift of quasars. It can separate quasars from stars in arbitrary redshift ranges, estimate full posterior distribution functions for the redshift, and naturally incorporate flux uncertainties, missing data, and multi-wavelength photometry. We build models of quasars in flux-redshift space by applying the extreme deconvolution technique to estimate the underlying density. By integrating this density over redshift, one can obtain quasar flux densities in different redshift ranges. This approach allows for efficient, consistent, and fast classification and photometric redshift estimation. This is achieved by combining the speed obtained by choosing simple analytical forms as the basis of our density model with the flexibility of non-parametric models through the use of many simple components with many parameters. We show that this technique is competitive with the best photometric quasar classification techniques—which are limited to fixed, broad redshift ranges and high signal-to-noise ratio data—and with the best photometric redshift techniques when applied to broadband optical data. We demonstrate that the inclusion of UV and NIR data significantly improves photometric quasar-star separation and essentially resolves all of the redshift degeneracies for quasars inherent to the ugriz filter system, even when the included data have a low signal-to-noise ratio. For quasars spectroscopically confirmed by the SDSS, 84% and 97% of the objects with Galaxy Evolution Explorer UV and UKIDSS NIR data have photometric redshifts within 0.1 and 0.3, respectively, of the spectroscopic redshift; this amounts to about a factor of three improvement over ugriz-only photometric redshifts. Our code to calculate quasar probabilities and redshift probability distributions is publicly available.

  14. Measurements of neutron flux distributions in the core of the Ljubljana TRIGA Mark II Reactor

    International Nuclear Information System (INIS)

    Rant, J.; Ravnik, M.; Mele, I.; Dimic, V.

    2008-01-01

    Recently, the Ljubljana TRIGA Mark II reactor has been refurbished and upgraded to pulsed operation. To verify the core design calculations using the TRIGAP and PULSTR1 codes and to obtain the necessary data for future irradiation and neutron beam experiments, an extensive experimental program of neutron flux mapping and neutron field characterization was carried out. Using the existing neutron measuring thimbles, complete axial and radial distributions in two radial directions were determined for two different core configurations. For one core configuration the measurements were also carried out in the pulsed mode. For the flux distributions, thin Cu wires (relative measurements) and diluted Au wires (absolute values) were used. For each radial position the cadmium ratio was determined at two axial levels. The core configuration was rather uniform, well defined (fresh fuel of a single type, including fuelled followers) and compact (no irradiation channels or gaps), offering a unique opportunity to test the computer codes for TRIGA reactor calculations. The neutron flux measuring procedures and techniques are described and the experimental results are presented. The agreement between the predicted and measured power peaking factors is within the error limits of the measurements (<±5%) and calculations (±10%). Power peaking occurs in the B ring, and in the A ring (centre) there is a significant flux depression. (authors)

  15. Magnetic Flux Distribution of Linear Machines with Novel Three-Dimensional Hybrid Magnet Arrays

    Directory of Open Access Journals (Sweden)

    Nan Yao

    2017-11-01

    The objective of this paper is to propose a novel tubular linear machine with hybrid permanent magnet arrays and multiple movers, which could be employed for either actuation or sensing technology. The hybrid magnet array produces flux distribution on both sides of the windings, and thus helps to increase the signal strength in the windings. The multiple movers are important for aerospace technology, because they can improve the system's redundancy and reliability. The proposed design concept is presented, and the governing equations are obtained based on the source-free property and Maxwell's equations. The magnetic field distribution in the linear machine is thus analytically formulated by using Bessel functions and a harmonic expansion of the magnetization vector. Numerical simulation is then conducted to validate the analytical solutions of the magnetic flux field. It is shown that the analytical model agrees well with the numerical results. Therefore, it can subsequently be utilized for the formulation of signal or force output, depending on its particular implementation.

  16. Use of heterogeneous finite elements generated by collision probability solutions to calculate a pool reactor core

    International Nuclear Information System (INIS)

    Calabrese, C.R.; Grant, C.R.

    1990-01-01

    This work presents comparisons between fluxes measured by activation of manganese foils in the light-water, enriched-uranium research pool reactor RA-2 (MTR-type fuel elements) and fluxes calculated by the finite element method (FEM) using the DELFIN code, and describes the heterogeneous finite elements by a set of solutions of the transport equations for several different configurations obtained using the collision probability code HUEMUL. The agreement between calculated and measured fluxes is good, and the advantage of using FEM is shown: to obtain the flux distribution with the same detail using a standard diffusion calculation, 12,000 mesh points would be necessary, against the 2,000 points that FEM uses; hence the processing time is reduced by a factor of ten. An interesting alternative for use in MTR fuel management is presented. (author)

  17. Reconstructing 3D profiles of flux distribution in array of unshunted Josephson junctions from 2D scanning SQUID microscope images

    International Nuclear Information System (INIS)

    Nascimento, F.M.; Sergeenkov, S.; Araujo-Moreira, F.M.

    2012-01-01

    By using a specially designed algorithm (based on utilizing the so-called Hierarchical Data Format), we report on the successful reconstruction of 3D profiles of the local flux distribution within artificially prepared arrays of unshunted Nb-AlOₓ-Nb Josephson junctions from 2D surface images obtained via the scanning SQUID microscope. The analysis of the obtained results suggests that for large sweep areas, the local flux distribution significantly deviates from the conventional picture and exhibits a more complicated avalanche-type behavior with a prominent dendritic structure. Highlights: The penetration of an external magnetic field into an array of Nb-AlOₓ-Nb Josephson junctions is studied. Using a scanning SQUID microscope, 2D images of the local flux distribution within the array are obtained. Using a specially designed pattern recognition algorithm, 3D flux profiles are reconstructed from the 2D images.

  18. Scarred resonances and steady probability distribution in a chaotic microcavity

    International Nuclear Information System (INIS)

    Lee, Soo-Young; Rim, Sunghwan; Kim, Chil-Min; Ryu, Jung-Wan; Kwon, Tae-Yoon

    2005-01-01

    We investigate scarred resonances of a stadium-shaped chaotic microcavity. It is shown that two components with different chirality of the scarring pattern are slightly rotated, in opposite ways, from the underlying unstable periodic orbit when the incident angles of the scarring pattern are close to the critical angle for total internal reflection. In addition, the correspondence of the emission pattern with the scarring pattern disappears when the incident angles are much larger than the critical angle. The steady probability distribution gives a consistent explanation of these interesting phenomena and makes it possible to anticipate the emission pattern in the latter case.

  19. Loaded dice in Monte Carlo : importance sampling in phase space integration and probability distributions for discrepancies

    NARCIS (Netherlands)

    Hameren, Andreas Ferdinand Willem van

    2001-01-01

    Discrepancies play an important role in the study of uniformity properties of point sets. Their probability distributions are a help in the analysis of the efficiency of the Quasi Monte Carlo method of numerical integration, which uses point sets that are distributed more uniformly than sets of

  20. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
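The probability calculation the article describes can be done exactly with rational arithmetic. A small sketch, with a hypothetical three-sided biased spinner (the side labels and probabilities are invented for illustration):

```python
from fractions import Fraction

# A biased spinner modeled as side -> probability; sides need not be equally likely.
spinner = {"A": Fraction(1, 2), "B": Fraction(1, 3), "C": Fraction(1, 6)}
assert sum(spinner.values()) == 1   # a valid probability distribution

def p_at_least_one(side, spins):
    """Probability of landing on `side` at least once in `spins` independent spins,
    via the complement rule: 1 - P(never landing on it)."""
    return 1 - (1 - spinner[side]) ** spins

p = p_at_least_one("C", 3)   # 1 - (5/6)**3 = 91/216
```

Using `Fraction` keeps every intermediate value exact, which mirrors the by-hand calculation a statistics student would do.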

  1. High-resolution dichroic imaging of magnetic flux distributions in superconductors with scanning x-ray microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ruoss, Stephen; Stahl, Claudia; Weigand, Markus; Schuetz, Gisela [Max-Planck-Institut fuer Intelligente Systeme, Stuttgart (Germany); Albrecht, Joachim [Research Institute for Innovative Surfaces, FINO, Aalen University (Germany)

    2015-07-01

    The penetration of magnetic flux into the high-temperature superconductor YBCO has been observed using a new high-resolution technique based on X-ray magnetic circular dichroism (XMCD). Superconductors coated with thin, soft magnetic layers of CoFeB are observed in a scanning x-ray microscope providing cooling of the sample down to 83 K under the influence of external magnetic fields. The resulting electrical currents create an inhomogeneous magnetic field distribution above the superconductor, which leads to a local reorientation of the ferromagnetic layer. X-ray absorption measurements with circularly polarized radiation allow the analysis of the magnetic flux distribution in the superconductor via the ferromagnetic layer. In this work we present first images taken at 83 K with high spatial resolution at the nanoscale.

  2. Entanglement probabilities of polymers: a white noise functional approach

    International Nuclear Information System (INIS)

    Bernido, Christopher C; Carpio-Bernido, M Victoria

    2003-01-01

    The entanglement probabilities for a highly flexible polymer to wind n times around a straight polymer are evaluated using white noise analysis. To introduce the white noise functional approach, the one-dimensional random walk problem is taken as an example. The polymer entanglement scenario, viewed as a random walk on a plane, is then treated and the entanglement probabilities are obtained for a magnetic flux confined along the straight polymer, and a case where an entangled polymer is subjected to the potential V = f-dot(s)θ. In the absence of the magnetic flux and the potential V, the entanglement probabilities reduce to a result obtained by Wiegel

  3. Airborne methane remote measurements reveal heavy-tail flux distribution in Four Corners region.

    Science.gov (United States)

    Frankenberg, C.

    2016-12-01

    Methane (CH4) impacts climate as the second strongest anthropogenic greenhouse gas and affects air quality by influencing tropospheric ozone levels. Space-based observations have identified the Four Corners region in the Southwest United States as an area of large CH4 enhancements. We conducted an airborne campaign in Four Corners during April 2015 with the next-generation Airborne Visible/Infrared Imaging Spectrometer (near-infrared) and the Hyperspectral Thermal Emission Spectrometer (thermal infrared) imaging spectrometers to better understand the sources of methane by measuring methane plumes at 1- to 3-m spatial resolution. Our analysis detected more than 250 individual methane plumes from fossil fuel harvesting, processing, and distribution infrastructure, spanning an emission range from the detection limit (~2 to 5 kg/h) up to ~5,000 kg/h. Observed sources include gas processing facilities, storage tanks, pipeline leaks, natural seeps and well pads, as well as a coal mine venting shaft. Overall, plume enhancements and inferred fluxes follow a lognormal distribution, with the top 10% of emitters contributing 49 to 66% of the inferred total point source flux of 0.23 Tg/y to 0.39 Tg/y. We will summarize the campaign results and provide an overview of how airborne remote sensing can be used to detect and infer methane fluxes over widespread geographic areas and how new instrumentation could be used to perform similar observations from space.
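The heavy-tail argument above can be sketched numerically: sample a lognormal emitter population and compute the share of total flux carried by the top decile. The lognormal parameters below are illustrative assumptions, not values fitted to the Four Corners data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lognormal flux population; the parameters are illustrative
# and not fitted to the Four Corners measurements.
fluxes = rng.lognormal(mean=3.0, sigma=1.5, size=10_000)   # kg/h

sorted_fluxes = np.sort(fluxes)[::-1]             # strongest emitters first
top10 = sorted_fluxes[: sorted_fluxes.size // 10]
share = top10.sum() / sorted_fluxes.sum()         # fraction of total flux

print(f"top 10% of emitters carry {share:.0%} of the total flux")
```

For a lognormal with sigma around 1.5, the top decile carries roughly half to two-thirds of the total, in line with the heavy-tail behavior the record describes.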

  4. Spatially Distributed, Coupled Modeling of Plant Growth, Nitrogen and Water Fluxes in an Alpine Catchment

    Science.gov (United States)

    Schneider, K.

    2001-12-01

    Carbon, water and nitrogen fluxes are closely coupled; they interact and have many feedbacks. Human interference, in particular through land use management and global change, strongly modifies these fluxes. Increasing demands and conflicting interests result in a growing need for regulation targeting different aspects of the system. Without being their main target, many of these measures directly affect water quantity, quality and availability. Improved management and planning of our water resources require the development of integrated tools, in particular since interactions of the involved environmental and social systems often lead to unexpected or adverse results. To investigate the effect of plant growth, land use management and global change on water fluxes and quality, the PROcess oriented Modular EnvironmenT and Vegetation Model (PROMET-V) was developed. PROMET-V models the spatial patterns and temporal course of water, carbon and nitrogen fluxes using process-oriented and mechanistic model components. The hydrological model is based on the Penman-Monteith approach; it uses a plant-physiological model to calculate the canopy conductance, and a multi-layer soil water model. Plant growth for different vegetation types is modelled by calculating canopy photosynthesis, respiration, phenology and allocation. Plant growth and water fluxes are coupled directly through photosynthesis and transpiration. Many indirect feedbacks and interactions occur due to their mutual dependency upon, for instance, leaf area, root distribution, and water and nutrient availability. PROMET-V also calculates nitrogen fluxes and transformations. The time step depends upon the modelled process and varies from 1 hour to 1 day. The kernel model is integrated in a raster GIS system for spatially distributed modelling.
    PROMET-V was tested in a pre-alpine landscape (Ammer river, 709 km², located in Southern Germany) which is characterized by small-scale spatial heterogeneities of climate, soil and

  5. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the Logistic map, to generate the pseudo-random numbers mapped as the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COA. To achieve high efficiency, it is recommended to adopt a chaotic map that generates sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
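The Lyapunov exponent criterion above can be illustrated with the Logistic map at r = 4, whose exponent is known analytically to equal ln 2; a minimal numerical estimate:

```python
import math

# Logistic map x_{n+1} = r*x*(1-x); at r = 4 the map is fully chaotic and
# its Lyapunov exponent is known analytically to be ln 2.
r = 4.0
x = 0.1234                        # arbitrary initial condition in (0, 1)
n_burn, n_iter = 1_000, 100_000

for _ in range(n_burn):           # discard the transient
    x = r * x * (1.0 - x)

log_deriv_sum = 0.0
for _ in range(n_iter):
    log_deriv_sum += math.log(abs(r * (1.0 - 2.0 * x)))  # ln|f'(x)|
    x = r * x * (1.0 - x)

lyapunov = log_deriv_sum / n_iter
print(f"estimated Lyapunov exponent: {lyapunov:.4f} (exact: {math.log(2):.4f})")
```

The estimate converges to ln 2 ≈ 0.693; maps with smaller exponents explore the design space more slowly, which is the search-speed effect the record discusses.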

  6. Effect of thermohydraulic parameter on the flux distribution and the effective multiplication factor

    International Nuclear Information System (INIS)

    Mello, J.C.; Valladares, G.L.

    1990-01-01

    The influence of two thermohydraulic parameters, the coolant flow velocity along the reactor channels and the increase of the average water temperature through the core, on the thermal flux distribution and on the effective multiplication factor was studied in a radioisotope production reactor. The results show that, for fixed values of the thermohydraulic parameters referred to above, there are limits to the reactor core volume reduction for each value of the V sub(mod)/V sub(comb) ratio. These thermohydraulic conditions determine the highest thermal flux value in the flux trap and the lowest value of the reactor effective multiplication factor. It is also shown that there is a V sub(mod)/V sub(comb) ratio value that corresponds to the highest value of the lowest effective multiplication factor. These results are interpreted and discussed using fundamental concepts and relations of reactor physics. (author)

  7. Two dimensional electron transport in disordered and ordered distributions of magnetic flux vortices

    International Nuclear Information System (INIS)

    Nielsen, M.; Hedegaard, P.

    1994-04-01

    We have considered the conductivity properties of a two-dimensional electron gas (2DEG) in two different kinds of inhomogeneous magnetic fields, i.e. a disordered distribution of magnetic flux vortices and a periodic array of magnetic flux vortices. The work falls into two parts. In the first part we show how the phase shifts for an electron scattering on an isolated vortex can be calculated analytically and related to the transport properties through the differential cross section. In the second part we present numerical results for the Hall conductivity of the 2DEG in a periodic array of flux vortices, found by exact diagonalization. We find characteristic spikes in the Hall conductance when it is plotted against the filling fraction. It is argued that the spikes can be interpreted in terms of ''topological charge'' piling up across local and global gaps in the energy spectrum. (au) (23 refs.)

  8. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    Energy Technology Data Exchange (ETDEWEB)

    Cherniz, AnalIa S; Bonell, Claudia E; Tabernig, Carolina B [Laboratorio de Ingenieria de Rehabilitacion e Investigaciones Neuromusculares y Sensoriales, Facultad de Ingenieria, UNER, Oro Verde (Argentina)

    2007-11-15

    The surface electromyographic signal is a stochastic signal that has been modeled as a Gaussian process with zero mean. It has been experimentally shown that this probability distribution can be fitted with less error by a Laplacian-type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with lesions of the upper motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function is the one that best fits the experimental data in the studied subjects, although this depends largely on the subject and on the data segment analyzed.
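The Gaussian-versus-Laplacian comparison above can be sketched with maximum-likelihood fits of both models; the data below are synthetic surrogates, not SEMG recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate "SEMG amplitude" samples drawn from a Laplacian, standing in
# for real recordings (illustrative only).
x = rng.laplace(loc=0.0, scale=1.0, size=5_000)

# Maximum-likelihood fits of the two candidate models.
mu, sigma = x.mean(), x.std()        # Gaussian MLE
m = np.median(x)
b = np.abs(x - m).mean()             # Laplace MLE (location = median)

loglik_gauss = (-0.5 * x.size * np.log(2.0 * np.pi * sigma**2)
                - ((x - mu) ** 2).sum() / (2.0 * sigma**2))
loglik_laplace = -x.size * np.log(2.0 * b) - np.abs(x - m).sum() / b

print("Gaussian:", round(float(loglik_gauss), 1),
      "Laplace:", round(float(loglik_laplace), 1))
```

On Laplacian-distributed data the Laplace log-likelihood exceeds the Gaussian one, which is the sense in which the Laplacian "adjusts with less error" in the record.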

  9. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    International Nuclear Information System (INIS)

    Cherniz, AnalIa S; Bonell, Claudia E; Tabernig, Carolina B

    2007-01-01

    The surface electromyographic signal is a stochastic signal that has been modeled as a Gaussian process with zero mean. It has been experimentally shown that this probability distribution can be fitted with less error by a Laplacian-type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with lesions of the upper motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function is the one that best fits the experimental data in the studied subjects, although this depends largely on the subject and on the data segment analyzed.

  10. Equilibrium quality and mass flux distributions in an adiabatic three-subchannel test section

    International Nuclear Information System (INIS)

    Yadigaroglu, G.; Maganas, A.

    1995-01-01

    An experiment was designed to measure the fully developed quality and mass flux distributions in an adiabatic three-subchannel test section. The three subchannels had the geometrical characteristics of the corner, side, and interior subchannels of a boiling water reactor (BWR-5) rod bundle. Data collected with Refrigerant-114 at pressures ranging from 7 to 14 bars, simulating operation with water in the range 55 to 103 bars, are reported. The average mass flux and quality in the test section were in the ranges 1,300 to 1,750 kg/m²·s and -0.03 to 0.25, respectively. The data are analyzed and presented in various forms.

  11. Spectral shaping of a randomized PWM DC-DC converter using maximum entropy probability distributions

    CSIR Research Space (South Africa)

    Dove, Albert

    2017-01-01

    Full Text Available maintaining constraints in a DC-DC converter is investigated. A probability distribution whose aim is to ensure maximal harmonic spreading and yet maintain constraints is presented. The PDFs are determined from a direct application of the method of Maximum...

  12. Fast neutron flux and intracranial dose distribution at a neutron irradiation facility

    International Nuclear Information System (INIS)

    Matsumoto, Tetsuo; Aizawa, Otohiko; Nozaki, Tetsuya

    1981-01-01

    A head phantom filled with water was used to measure the fast neutron flux via the ^115In(n,n')^115mIn and ^103Rh(n,n')^103mRh reactions. The γ-rays from ^115mIn and the x-rays from ^103mRh were detected by a Ge(Li) and a NaI(Tl) counter, respectively. TLDs were used to investigate the γ-dose rate distribution inside the phantom. The fast neutron flux inside the phantom was about 1 x 10^6 n/cm^2·s, three orders of magnitude smaller than the thermal neutron flux. The fast neutron flux decreased to 1/10 at 15 cm depth, and the γ-dose rate inside the phantom was about 200 R/h at 100 kW. The total dose at the surface was 350 rad/h, to which fast neutrons contributed more than γ-rays. The fast neutron dose was about 10% of the thermal neutron dose in Kerma units (rad); however, this ratio depends strongly on the assumed RBE value. (Nakanishi, T.)

  13. Measurement and analysis of neutron flux distribution of STACY heterogeneous core by position sensitive proportional counter. Contract research

    CERN Document Server

    Murazaki, M; Uno, Y

    2003-01-01

    We have measured the neutron flux distribution around the core tank of the STACY heterogeneous core with a position-sensitive proportional counter (PSPC) to develop a method for measuring the reactivity of subcritical systems. Neutron flux distribution data with a position accuracy of ±13 mm have been obtained over a uranium concentration range of 50 g/L to 210 g/L, both in critical and in subcritical states. The prompt neutron decay constant, alpha, was evaluated from the measurement data of pulsed neutron source experiments. We also calculated the distribution of the neutron flux and the 3He reaction rates at the location of the PSPC using the continuous-energy Monte Carlo code MCNP, and compared the measurement data with the calculation results. The calculated values generally agreed with the measurement data of the PSPC with Cd cover in the region above half of the solution height, but the difference between calculated values and measurement data was large in the region below half of the solution height. On the other hand, ...

  14. Determination and analysis of neutron flux distribution on radial Piercing beam port for utilization of Kartini research reactor

    International Nuclear Information System (INIS)

    Widarto

    2002-01-01

    Determination and analysis of neutron flux measurements on the radial piercing beam port have been carried out to complete the experimental data documentation and to advance utilization of the Kartini research reactor. The neutron flux was determined by the neutron activation analysis method using Au foil detectors placed along the radius of the cross section (19 cm) and the length of the radial piercing beam port (310 cm). Based on the calculation, the thermal neutron flux ranges from (8.3 ± 0.9) x 10^5 n·cm^-2·s^-1 to (6.8 ± 0.5) x 10^7 n·cm^-2·s^-1 and the fast neutron flux from (5.0 ± 0.2) x 10^5 n·cm^-2·s^-1 to (1.43 ± 0.6) x 10^7 n·cm^-2·s^-1. Analysis by curve fitting leads to the conclusion that the neutron flux distribution along the radial piercing beam port follows a polynomial curve. (author)
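A polynomial fit to an axial flux profile of this kind can be sketched as follows; the positions and flux values are synthetic stand-ins generated from a smooth profile, not the reported measurements:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical axial positions (cm) along a 310 cm beam port and synthetic
# "flux" readings from a smooth profile with small noise (illustrative only).
pos = np.linspace(0.0, 310.0, 8)
log10_true = 7.83 - 4.0e-3 * pos - 6.0e-6 * pos**2
flux = 10.0 ** (log10_true + rng.normal(0.0, 0.02, pos.size))

# Fit a low-order polynomial to log10(flux), since the flux spans roughly
# two orders of magnitude along the port.
coeffs = np.polyfit(pos, np.log10(flux), deg=2)
model = np.poly1d(coeffs)
residuals = np.log10(flux) - model(pos)

print("max |log10 residual|:", float(np.abs(residuals).max()))
```

Fitting in log space is a design choice for data spanning orders of magnitude; a direct polynomial fit of the raw flux would weight the largest values disproportionately.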

  15. Effect of feed flow pattern on the distribution of permeate fluxes in desalination by direct contact membrane distillation

    KAUST Repository

    Soukane, Sofiane

    2017-05-31

    The current study aims to highlight the effect of flow pattern on the variations of permeate fluxes over the membrane surface during desalination in a direct contact membrane distillation (DCMD) flat module. To do so, a three-dimensional (3D) Computational Fluid Dynamics (CFD) model with embedded pore-scale calculations is implemented to predict flow, heat and mass transfer in the DCMD module. Model validation is carried out in terms of average permeate fluxes against experimental data of seawater desalination using two commercially available PTFE membranes. Average permeate fluxes agree with experimental values to within 6%, without fitting parameters. Simulation results show that the distribution of permeate fluxes and seawater salinity over the membrane surface is strongly dependent on momentum and heat transport, and that temperature and concentration polarization follow the flow distribution closely. The analysis reveals a drastic effect of recirculation loops and dead zones on module performance, and recommendations to improve MD flat module design are drawn accordingly.

  16. Physics of Intrinsic Rotation in Flux-Driven ITG Turbulence

    International Nuclear Information System (INIS)

    Ku, S.; Abiteboul, J.; Dimond, P.H.; Dif-Pradalier, G.; Kwon, J.M.; Sarazin, Y.; Hahm, T.S.; Garbet, X.; Chang, C.S.; Latu, G.; Yoon, E.S.; Ghendrih, Ph.; Yi, S.; Strugarek, A.; Solomon, W.; Grandgirard, V.

    2012-01-01

    Global, heat flux-driven ITG gyrokinetic simulations which manifest the formation of macroscopic mean toroidal flow profiles with peak thermal Mach number 0.05 are reported. Both a particle-in-cell (XGC1p) and a semi-Lagrangian (GYSELA) approach are utilized without a priori assumptions of scale-separation between turbulence and mean fields. Flux-driven ITG simulations with different edge flow boundary conditions show in both approaches the development of net unidirectional intrinsic rotation in the co-current direction. Intrinsic torque is shown to scale approximately linearly with the inverse scale length of the ion temperature gradient. External momentum input is shown to effectively cancel the intrinsic rotation profile, thus confirming the existence of a local residual stress and intrinsic torque. Fluctuation intensity, intrinsic torque and mean flow are demonstrated to develop inwards from the boundary. The measured correlations between residual stress and two fluctuation spectrum symmetry breakers, namely E x B shear and intensity gradient, are similar. Avalanches of (positive) heat flux, which propagate either outwards or inwards, are correlated with avalanches of (negative) parallel momentum flux, so that outward transport of heat and inward transport of parallel momentum are correlated and mediated by avalanches. The probability distribution functions of the outward heat flux and the inward momentum flux show strong structural similarity.

  17. DETECTION OF FLUX EMERGENCE, SPLITTING, MERGING, AND CANCELLATION OF NETWORK FIELD. I. SPLITTING AND MERGING

    Energy Technology Data Exchange (ETDEWEB)

    Iida, Y.; Yokoyama, T. [Department of Earth and Planetary Science, University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Hagenaar, H. J. [Lockheed Martin Advanced Technology Center, Org. ADBS, Building 252, 3251 Hanover Street, Palo Alto, CA 94304 (United States)

    2012-06-20

    Frequencies of magnetic patch processes on the supergranule boundary, namely flux emergence, splitting, merging, and cancellation, are investigated through automatic detection. We use a set of line-of-sight magnetograms taken by the Solar Optical Telescope (SOT) on board the Hinode satellite. We found 1636 positive patches and 1637 negative patches in the data set, whose time duration is 3.5 hr and field of view is 112'' × 112''. The total numbers of magnetic processes are as follows: 493 positive and 482 negative splittings, 536 positive and 535 negative mergings, 86 cancellations, and 3 emergences. The total numbers of emergences and cancellations are significantly smaller than those of splittings and mergings. Further, the dependence of the merging and splitting frequencies on the flux content is investigated. Merging has a weak dependence on the flux content, with a power-law index of only 0.28. The timescale for splitting is found to be independent of the parent flux content before splitting and corresponds to ~33 minutes. It is also found that patches split into any flux contents with the same probability. This splitting has a power-law distribution of the flux content with an index of -2 as a time-independent solution. These results support the picture that the frequency distribution of the flux content in the analyzed flux range is rapidly maintained by merging and splitting, namely by surface processes. We suggest a model for the frequency distributions of cancellation and emergence based on this idea.

  18. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise with finite strength may induce nontrivial phenomena such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise but may induce certain deviations for large noise. Finally, some possible ways to improve the method are discussed.

  19. Constructing probability distributions of uncertain variables in models of the performance of the Waste Isolation Pilot Plant: The 1990 performance simulations

    International Nuclear Information System (INIS)

    Tierney, M.S.

    1990-12-01

    A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent and minimizes the amount of spurious information that is often introduced into a distribution by assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. (author)
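The Maximum Entropy Formalism step can be illustrated for the simplest constrained case: the maximum-entropy density on a bounded interval with a prescribed mean, which is an exponentially tilted density p(x) ∝ exp(λx). The interval and target mean below are hypothetical stand-ins for an expert estimate:

```python
import math

# Maximum-entropy density on [a, b] with a prescribed mean: p(x) ∝ exp(lam*x).
# Solve for lam by bisection so the implied mean matches the target.
a, b = 0.0, 1.0
target_mean = 0.3            # hypothetical expert estimate of the mean

def mean_given_lam(lam):
    """Mean of the tilted density exp(lam*x)/Z on [a, b]."""
    if abs(lam) < 1e-9:      # lam -> 0 gives the uniform density
        return (a + b) / 2.0
    ea, eb = math.exp(lam * a), math.exp(lam * b)
    return (b * eb - a * ea) / (eb - ea) - 1.0 / lam

# The mean is monotone increasing in lam, so bisection converges.
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_given_lam(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

print("lambda:", round(lam, 4), "achieved mean:", round(mean_given_lam(lam), 6))
```

A target mean below the interval midpoint yields a negative λ, i.e. a density that decays toward the upper bound; this is the maximum-entropy answer when only the range and the mean are known.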

  20. Constructing probability distributions of uncertain variables in models of the performance of the Waste Isolation Pilot Plant: The 1990 performance simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, M S

    1990-12-15

    A five-step procedure was used in the 1990 performance simulations to construct probability distributions of the uncertain variables appearing in the mathematical models used to simulate the Waste Isolation Pilot Plant's (WIPP's) performance. This procedure provides a consistent approach to the construction of probability distributions in cases where empirical data concerning a variable are sparse or absent and minimizes the amount of spurious information that is often introduced into a distribution by assumptions of nonspecialists. The procedure gives first priority to the professional judgment of subject-matter experts and emphasizes the use of site-specific empirical data for the construction of the probability distributions when such data are available. In the absence of sufficient empirical data, the procedure employs the Maximum Entropy Formalism and the subject-matter experts' subjective estimates of the parameters of the distribution to construct a distribution that can be used in a performance simulation. (author)

  1. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-06-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
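An empirical FDC of the kind modeled above can be computed from a daily flow record by sorting flows in descending order and assigning Weibull plotting-position exceedance probabilities. The flow record below is synthetic (a lognormal stand-in, not the four-parameter kappa distribution the paper favors):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "daily streamflow" record; a lognormal is used only as a
# convenient stand-in for ten years of daily flows.
flows = rng.lognormal(mean=2.0, sigma=1.0, size=365 * 10)

# Empirical flow duration curve: sort flows in descending order and assign
# each the Weibull plotting-position exceedance probability i/(n+1).
sorted_flows = np.sort(flows)[::-1]
n = sorted_flows.size
exceed_prob = np.arange(1, n + 1) / (n + 1)

# e.g. the flow equaled or exceeded 95% of the time (a common low-flow index)
q95 = float(sorted_flows[np.searchsorted(exceed_prob, 0.95)])
print("Q95 =", round(q95, 3))
```

Fitting a parametric distribution to `flows` and comparing its quantiles against `(exceed_prob, sorted_flows)` is the modeled-versus-empirical FDC comparison the record describes.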

  2. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Full Text Available Analysis of knee joint vibration or vibroarthrographic (VAG signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed by using the Fisher’s linear discriminant analysis, support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide the total classification accuracy of 86.67% and the area (Az of 0.9096 under the receiver operating characteristics curve, which were superior to the results obtained by either the Fisher’s linear discriminant analysis (accuracy: 81.33%, Az: 0.8564 or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533. Such results demonstrated the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for analysis of knee joint VAG signals.
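The kernel-density-estimation plus maximal-posterior-probability scheme can be sketched in one dimension (the paper works with bivariate features); the data, kernel bandwidth, and class parameters below are illustrative, not taken from the VAG study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two illustrative 1-D feature groups standing in for "normal" and
# "abnormal" VAG signal features (synthetic data only).
normal = rng.normal(0.0, 1.0, 200)
abnormal = rng.normal(2.5, 1.2, 200)

def kde(train, x, h=0.4):
    """Gaussian kernel density estimate of `train` evaluated at points x."""
    diffs = (x[:, None] - train[None, :]) / h
    return np.exp(-0.5 * diffs**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

test_x = np.array([-0.5, 0.2, 2.4, 3.1])

# With equal priors, the maximal posterior probability decision reduces to
# comparing the two class-conditional density estimates.
predict_abnormal = kde(abnormal, test_x) > kde(normal, test_x)
print(predict_abnormal)
```

The bivariate case in the paper replaces the scalar kernel with a product (or full covariance) kernel over the two features; the decision rule is unchanged.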

  3. Critical heat fluxes and liquid distribution in annular channels in the dispersion-annular flow

    International Nuclear Information System (INIS)

    Boltenko, Eh.A.; Pomet'ko, R.S.

    1984-01-01

    On the basis of the dependence of the intensity of total mass transfer between the flow core and the wall film obtained for tubes with uniform heat release, and taking into account the peculiarities of mass transfer between the flow core and the wall film in annular channels, a technique is proposed for calculating the liquid distribution and critical capacity of annular channels with internal, external and bilateral heating, at uniform and non-uniform heat release over the length. The critical capacity of annular channels was calculated according to the suggested technique. Satisfactory agreement of the calculation results with the experimental data is attained.

  4. A benchmark analysis of radiation flux distribution for Boron Neutron Capture Therapy of canine brain tumors

    Energy Technology Data Exchange (ETDEWEB)

    Moran, Jean M. [Univ. of Idaho, Idaho Falls, ID (United States)

    1992-02-01

    Calculations of radiation flux and dose distributions for Boron Neutron Capture Therapy (BNCT) of brain tumors are typically performed using sophisticated three-dimensional analytical models based on either a homogeneous approximation or a simplified few-region approximation to the actual highly-heterogeneous geometry of the irradiation volume. Such models should be validated by comparison with calculations using detailed models in which all significant macroscopic tissue heterogeneities and geometric structures are explicitly represented as faithfully as possible. This work describes a validation exercise for BNCT of canine brain tumors. Geometric measurements of the canine anatomical structures of interest for this work were performed by dissecting and examining two essentially identical Labrador Retriever heads. Chemical analyses of various tissue samples taken during the dissections were conducted to obtain measurements of elemental compositions for tissues of interest. The resulting geometry and tissue composition data were then used to construct a detailed heterogeneous calculational model of the Labrador Retriever head. Calculations of three-dimensional radiation flux distributions pertinent to BNCT were performed for the model using the TORT discrete-ordinates radiation transport code. The calculations were repeated for a corresponding volume-weighted homogeneous tissue model. Comparison of the results showed that the peak neutron and photon flux magnitudes were quite similar for the two models (within 5%), but that the spatial flux profiles were shifted in the heterogeneous model such that the fluxes in some locations away from the peak differed from the corresponding fluxes in the homogeneous model by as much as 10-20%. Differences of this magnitude can be therapeutically significant, emphasizing the need for proper validation of simplified treatment planning models.

  5. A benchmark analysis of radiation flux distribution for Boron Neutron Capture Therapy of canine brain tumors

    International Nuclear Information System (INIS)

    Moran, J.M.

    1992-02-01

    Calculations of radiation flux and dose distributions for Boron Neutron Capture Therapy (BNCT) of brain tumors are typically performed using sophisticated three-dimensional analytical models based on either a homogeneous approximation or a simplified few-region approximation to the actual highly-heterogeneous geometry of the irradiation volume. Such models should be validated by comparison with calculations using detailed models in which all significant macroscopic tissue heterogeneities and geometric structures are explicitly represented as faithfully as possible. This work describes a validation exercise for BNCT of canine brain tumors. Geometric measurements of the canine anatomical structures of interest for this work were performed by dissecting and examining two essentially identical Labrador Retriever heads. Chemical analyses of various tissue samples taken during the dissections were conducted to obtain measurements of elemental compositions for tissues of interest. The resulting geometry and tissue composition data were then used to construct a detailed heterogeneous calculational model of the Labrador Retriever head. Calculations of three-dimensional radiation flux distributions pertinent to BNCT were performed for the model using the TORT discrete-ordinates radiation transport code. The calculations were repeated for a corresponding volume-weighted homogeneous tissue model. Comparison of the results showed that the peak neutron and photon flux magnitudes were quite similar for the two models (within 5%), but that the spatial flux profiles were shifted in the heterogeneous model such that the fluxes in some locations away from the peak differed from the corresponding fluxes in the homogeneous model by as much as 10-20%. Differences of this magnitude can be therapeutically significant, emphasizing the need for proper validation of simplified treatment planning models

  6. The galactic contribution to IceCube's astrophysical neutrino flux

    Energy Technology Data Exchange (ETDEWEB)

    Denton, Peter B. [Niels Bohr International Academy, University of Copenhagen, The Niels Bohr Institute, Blegdamsvej 17, DK-2100, Copenhagen (Denmark); Marfatia, Danny [Department of Physics and Astronomy, University of Hawaii at Manoa, 2505 Correa Rd., Honolulu, HI 96822 (United States); Weiler, Thomas J., E-mail: peterbd1@gmail.com, E-mail: dmarf8@hawaii.edu, E-mail: tom.weiler@vanderbilt.edu [Department of Physics and Astronomy, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235 (United States)

    2017-08-01

High energy neutrinos have been detected by IceCube, but their origin remains a mystery. Determining the sources of this flux is a crucial first step towards multi-messenger studies. In this work we systematically compare two classes of sources with the data: galactic and extragalactic. We assume that the neutrino sources are distributed according to a class of Galactic models. We build a likelihood function on an event-by-event basis including energy, event topology, absorption, and direction information. We present the probability that each high energy event with deposited energy E_dep > 60 TeV in the HESE sample is Galactic, extragalactic, or background. For the Galactic models considered, the Galactic fraction of the astrophysical flux has a best fit value of 1.3% and is <9.5% at 90% CL. A zero Galactic flux is allowed at <1σ.

  7. PERFORMANCE OPTIMIZATION OF LINEAR INDUCTION MOTOR BY EDDY CURRENT AND FLUX DENSITY DISTRIBUTION ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. S. MANNA

    2011-12-01

Full Text Available The development of electromagnetic devices such as machines, transformers and heating devices confronts engineers with several problems. For the design of an optimized geometry and the prediction of operational behaviour, accurate knowledge of the dependencies of the field quantities inside the magnetic circuits is necessary. This paper provides an eddy current and core flux density distribution analysis of a linear induction motor. Magnetic flux in the air gap of the Linear Induction Motor (LIM) is reduced by various losses such as end effects, fringing effects, skin effects, etc. The finite element based software package COMSOL Multiphysics (COMSOL Inc., USA) is used to obtain reliable and accurate computational results for optimizing the performance of the Linear Induction Motor (LIM). The geometrical characteristics of the LIM are varied to find the optimal point of thrust and minimum flux leakage during static and dynamic conditions.

  8. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks

    International Nuclear Information System (INIS)

    Zhuang Jiancang; Ogata, Yosihiko

    2006-01-01

The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and to derive the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants is found to be as high as about 15% of all events. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata et al., Geophys. J. Int. 127, 17 (1996)] using the conventional single-linked cluster declustering method.

  9. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks.

    Science.gov (United States)

    Zhuang, Jiancang; Ogata, Yosihiko

    2006-04-01

The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and to derive the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants is found to be as high as about 15% of all events. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] using the conventional single-linked cluster declustering method.
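The branching-process reasoning in these records can be illustrated with a heavily simplified Monte Carlo sketch: each background event starts a subcritical cluster in which every event triggers a Poisson number of children with a constant mean (the full ETAS productivity law depends on the parent magnitude and is ignored here), and magnitudes follow an independent Gutenberg-Richter law. All parameter values below are illustrative assumptions, not the paper's.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; adequate for small lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def cluster_has_larger(m0, m_min, beta, nu, rng, max_events=100_000):
    """Grow the cluster started by an event of magnitude m0; return True
    as soon as any triggered event exceeds m0 (m0 was a 'foreshock')."""
    pending, generated = 1, 0   # events still waiting to produce children
    while pending and generated < max_events:
        pending -= 1
        generated += 1
        for _ in range(sample_poisson(nu, rng)):
            if m_min + rng.expovariate(beta) > m0:   # Gutenberg-Richter child
                return True
            pending += 1
    return False

def foreshock_probability(m_min=4.0, b=1.0, nu=0.8, trials=2000, seed=1):
    """Fraction of background events that acquire a larger descendant,
    for a subcritical branching ratio nu < 1."""
    rng = random.Random(seed)
    beta = b * math.log(10.0)
    hits = sum(
        cluster_has_larger(m_min + rng.expovariate(beta), m_min, beta, nu, rng)
        for _ in range(trials))
    return hits / trials

print(foreshock_probability())
```

Because the constant-productivity assumption is much cruder than ETAS, the resulting fraction should not be compared quantitatively with the 8-15% catalog figures reported in the abstract.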

  10. Providing probability distributions for the causal pathogen of clinical mastitis using naive Bayesian networks

    NARCIS (Netherlands)

    Steeneveld, W.; Gaag, van der L.C.; Barkema, H.W.; Hogeveen, H.

    2009-01-01

    Clinical mastitis (CM) can be caused by a wide variety of pathogens and farmers must start treatment before the actual causal pathogen is known. By providing a probability distribution for the causal pathogen, naive Bayesian networks (NBN) can serve as a management tool for farmers to decide which

  11. Prompt neutrino fluxes in the atmosphere with PROSA parton distribution functions

    International Nuclear Information System (INIS)

    Garzelli, M.V.; Moch, S.; Placakyte, R.; Sigl, G.; Cooper-Sarkar, A.

    2016-11-01

Effects of present uncertainties in the nucleon composition on atmospheric prompt neutrino fluxes are studied by using the PROSA fit to parton distribution functions (PDFs). The PROSA fit extends the precision of the PDFs to low x, which is the kinematic region of relevance for high-energy neutrino production, by taking into account LHCb data on charm and bottom hadroproduction. In the range of neutrino energies explored by present Very Large Volume Neutrino Telescopes, it is found that PDF uncertainties are far smaller than those due to renormalization- and factorization-scale variation and to assumptions on the cosmic ray composition, which at present dominate and limit our knowledge of prompt neutrino fluxes. A discussion is presented on how these uncertainties affect the expected number of atmospheric prompt neutrino events in the analysis of high-energy events characterized by interaction vertices fully contained within the instrumented volume of the detector, performed by the IceCube collaboration.

  12. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

The Monte Carlo method is commonly used to observe the overall distribution and to determine the lower or upper bound value in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, entitled 'Two Step Tail Area Sampling', is developed; it assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling is done at points separated by large intervals; second, sampling is done at points separated by small intervals, with check points determined from the first-step sampling. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than the Monte Carlo method when both methods use the same number of calculations. This new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of the pressurized light water nuclear reactor.
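The idea of concentrating effort in the tail can be sketched for a continuous density (the paper works with discrete distributions, and its check-point mechanics are more involved than this): a cheap coarse pass covers the whole support, and the fine pass is spent only beyond the threshold of interest. All names and parameters here are illustrative.

```python
import math

def normal_pdf(x):
    """Standard normal density, used as the test distribution."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def two_step_tail_area(pdf, lo, hi, threshold, coarse_n=50, fine_n=2000):
    """Step 1: coarse pass over the whole support (preserves the overall
    distribution, here via its normalisation).  Step 2: fine pass
    restricted to the tail [threshold, hi], where accuracy is needed."""
    total = trapezoid(pdf, lo, hi, coarse_n)
    tail = trapezoid(pdf, threshold, hi, fine_n)
    return tail / total

p = two_step_tail_area(normal_pdf, -8.0, 8.0, 2.0)
print(p)  # close to P(Z > 2) ~ 0.02275
```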

  13. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. Conflicts among the criterion results for selecting the best distribution were overcome by using the weight of ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
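A minimal version of the fit-and-rank procedure can be sketched with two of the four candidate families (exponential and log-normal, which have closed-form CDFs; Gamma and Weibull fitting and the weight-of-ranks step are omitted), using the Kolmogorov-Smirnov statistic as the single goodness-of-fit criterion:

```python
import math
import random

def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = max(D+, D-)."""
    xs = sorted(data)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

def fit_and_rank(data):
    """Fit each candidate by simple moment/ML estimates, rank by KS."""
    mean = sum(data) / len(data)                 # exponential: ML mean
    logs = [math.log(x) for x in data]           # log-normal: fit on logs
    mu = sum(logs) / len(logs)
    sigma = (sum((v - mu) ** 2 for v in logs) / len(logs)) ** 0.5
    cdfs = {
        "exponential": lambda x: 1.0 - math.exp(-x / mean),
        "log-normal": lambda x: 0.5 * (1.0 + math.erf(
            (math.log(x) - mu) / (sigma * math.sqrt(2.0)))),
    }
    return sorted((ks_statistic(data, f), name) for name, f in cdfs.items())

rng = random.Random(0)
data = [rng.lognormvariate(0.0, 0.5) for _ in range(2000)]
print(fit_and_rank(data))  # log-normal ranks first (smallest KS)
```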

  14. Using a spatially-distributed hydrologic biogeochemistry model with nitrogen transport to study the spatial variation of carbon stocks and fluxes in a Critical Zone Observatory

    Science.gov (United States)

    Shi, Y.; Eissenstat, D. M.; He, Y.; Davis, K. J.

    2017-12-01

    Most current biogeochemical models are 1-D and represent one point in space. Therefore, they cannot resolve topographically driven land surface heterogeneity (e.g., lateral water flow, soil moisture, soil temperature, solar radiation) or the spatial pattern of nutrient availability. A spatially distributed forest biogeochemical model with nitrogen transport, Flux-PIHM-BGC, has been developed by coupling a 1-D mechanistic biogeochemical model Biome-BGC (BBGC) with a spatially distributed land surface hydrologic model, Flux-PIHM, and adding an advection dominated nitrogen transport module. Flux-PIHM is a coupled physically based model, which incorporates a land-surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model, and is augmented by adding a topographic solar radiation module. Flux-PIHM is able to represent the link between groundwater and the surface energy balance, as well as land surface heterogeneities caused by topography. In the coupled Flux-PIHM-BGC model, each Flux-PIHM model grid couples a 1-D BBGC model, while nitrogen is transported among model grids via surface and subsurface water flow. In each grid, Flux-PIHM provides BBGC with soil moisture, soil temperature, and solar radiation, while BBGC provides Flux-PIHM with spatially-distributed leaf area index. The coupled Flux-PIHM-BGC model has been implemented at the Susquehanna/Shale Hills Critical Zone Observatory. The model-predicted aboveground vegetation carbon and soil carbon distributions generally agree with the macro patterns observed within the watershed. The importance of abiotic variables (including soil moisture, soil temperature, solar radiation, and soil mineral nitrogen) in predicting aboveground carbon distribution is calculated using a random forest. The result suggests that the spatial pattern of aboveground carbon is controlled by the distribution of soil mineral nitrogen. A Flux-PIHM-BGC simulation

  15. Foundations of quantization for probability distributions

    CERN Document Server

    Graf, Siegfried

    2000-01-01

    Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.

  16. Measurement and analysis of neutron flux distribution of STACY heterogeneous core by position sensitive proportional counter. Contract research

    Energy Technology Data Exchange (ETDEWEB)

    Murazaki, Minoru; Uno, Yuichi; Miyoshi, Yoshinori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

We have measured the neutron flux distribution around the core tank of the STACY heterogeneous core with a position sensitive proportional counter (PSPC) in order to develop a method to measure the reactivity of subcritical systems. Neutron flux distribution data with a position accuracy of ±13 mm were obtained over a uranium concentration range of 50 g/L to 210 g/L, both in critical and in subcritical states. The prompt neutron decay constant, α, was evaluated from the measurement data of pulsed neutron source experiments. We also calculated the distribution of the neutron flux and the ³He reaction rates at the location of the PSPC by using the continuous-energy Monte Carlo code MCNP. The measurement data were compared with the calculation results. The calculated values generally agreed with the measurement data of the PSPC with Cd cover in the region above half of the solution height, but the difference between the calculated values and the measurement data was large in the region below half of the solution height. On the other hand, the calculated values agreed well with the measurement data of the PSPC without Cd cover. (author)

  17. Research on Energy-Saving Design of Overhead Travelling Crane Camber Based on Probability Load Distribution

    Directory of Open Access Journals (Sweden)

    Tong Yifei

    2014-01-01

Full Text Available A crane is a mechanical device used widely to move materials in modern production. It is reported that the energy consumption in China is at least 5–8 times that of other developing countries. Thus, energy consumption becomes an unavoidable topic. Several factors influence the energy loss, and the camber of the girder is one that should not be neglected. In this paper, the problem of the deflections induced by the moving payload in the girder of an overhead travelling crane is examined. The evaluation of a camber giving a counter-deflection of the girder is proposed in order to minimize the energy consumed by the trolley moving along a non-straight support. To this end, probabilistic payload distributions are considered instead of the fixed or rated loads used in other research. Taking a 50/10 t bridge crane as the research object, the probability loads are determined by analysis of the load distribution density functions. According to the load distribution, camber design under different probability loads is discussed in detail, as well as the distribution of energy consumption. The research results provide a design reference for a reasonable camber that achieves the least energy consumption for climbing corresponding to different P0; thus an energy-saving design can be achieved.

  18. Calculation of the radial and axial flux and power distribution for a CANDU 6 reactor with both the MCNP6 and Serpent codes

    International Nuclear Information System (INIS)

    Hussein, M.S.; Bonin, H.W.; Lewis, B.J.

    2014-01-01

    The most recent versions of the Monte Carlo-based probabilistic transport code MCNP6 and the continuous energy reactor physics burnup calculation code Serpent allow for a 3-D geometry calculation accounting for the detailed geometry without unit-cell homogenization. These two codes are used to calculate the axial and radial flux and power distributions for a CANDU6 GENTILLY-2 nuclear reactor core with 37-element fuel bundles. The multiplication factor, actual flux distribution and power density distribution were calculated by using a tally combination for MCNP6 and detector analysis for Serpent. Excellent agreement was found in the calculated flux and power distribution. The Serpent code is most efficient in terms of the computational time. (author)

  19. Calculation of the radial and axial flux and power distribution for a CANDU 6 reactor with both the MCNP6 and Serpent codes

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, M.S.; Bonin, H.W., E-mail: mohamed.hussein@rmc.ca, E-mail: bonin-h@rmc.ca [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, ON (Canada); Lewis, B.J., E-mail: Brent.Lewis@uoit.ca [Univ. of Ontario Inst. of Tech., Faculty of Energy Systems and Nuclear Science, Oshawa, ON (Canada)

    2014-07-01

    The most recent versions of the Monte Carlo-based probabilistic transport code MCNP6 and the continuous energy reactor physics burnup calculation code Serpent allow for a 3-D geometry calculation accounting for the detailed geometry without unit-cell homogenization. These two codes are used to calculate the axial and radial flux and power distributions for a CANDU6 GENTILLY-2 nuclear reactor core with 37-element fuel bundles. The multiplication factor, actual flux distribution and power density distribution were calculated by using a tally combination for MCNP6 and detector analysis for Serpent. Excellent agreement was found in the calculated flux and power distribution. The Serpent code is most efficient in terms of the computational time. (author)

  20. Tumour control probability (TCP) for non-uniform activity distribution in radionuclide therapy

    International Nuclear Information System (INIS)

    Uusijaervi, Helena; Bernhardt, Peter; Forssell-Aronsson, Eva

    2008-01-01

Non-uniform radionuclide distribution in tumours will lead to a non-uniform absorbed dose. The aim of this study was to investigate how tumour control probability (TCP) depends on the radionuclide distribution in the tumour, both macroscopically and at the subcellular level. The absorbed dose in the cell nuclei of tumours was calculated for 90Y, 177Lu, 103mRh and 211At. The radionuclides were uniformly distributed within the subcellular compartment, and they were uniformly, normally or log-normally distributed among the cells in the tumour. When all cells contain the same amount of activity, the cumulated activities required for TCP = 0.99 (Ã(TCP=0.99)) were 1.5-2 and 2-3 times higher when the activity was distributed on the cell membrane compared to in the cell nucleus for 103mRh and 211At, respectively. TCP for 90Y was not affected by different radionuclide distributions, whereas for 177Lu it was slightly affected when the radionuclide was in the nucleus. TCP for 103mRh and 211At was affected by different radionuclide distributions to a great extent when the radionuclides were in the cell nucleus, and to a lesser extent when the radionuclides were distributed on the cell membrane or in the cytoplasm. When the activity was distributed in the nucleus, Ã(TCP=0.99) increased as the activity distribution became more heterogeneous for 103mRh and 211At, and the increase was large when the activity was normally distributed compared to log-normally distributed. When the activity was distributed on the cell membrane, Ã(TCP=0.99) was not affected for 103mRh and 211At as the activity distribution became more heterogeneous. Ã(TCP=0.99) for 90Y and 177Lu was not affected by different activity distributions, either macroscopic or subcellular.
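The qualitative effect reported here, that heterogeneity in the dose across cells lowers TCP at a fixed mean dose, can be reproduced with a toy Poisson TCP model (the alpha value, dose level and cell number are illustrative assumptions, not taken from the paper):

```python
import math
import random

def tcp(doses, alpha=0.35):
    """Poisson TCP with single-hit exponential cell kill: each cell
    survives with probability exp(-alpha*D), so the expected number of
    survivors is sum_i exp(-alpha*D_i) and TCP = exp(-survivors)."""
    return math.exp(-sum(math.exp(-alpha * d) for d in doses))

rng = random.Random(42)
n_cells, mean_dose, sigma = 10_000, 30.0, 0.5

uniform = [mean_dose] * n_cells
# Log-normal doses with the same mean: mu chosen so that E[D] = mean_dose.
mu = math.log(mean_dose) - 0.5 * sigma ** 2
heterogeneous = [rng.lognormvariate(mu, sigma) for _ in range(n_cells)]

print(tcp(uniform), tcp(heterogeneous))  # heterogeneity lowers TCP
```

By Jensen's inequality, exp(-αD) is convex in D, so spreading the dose across cells raises the expected number of surviving cells and lowers TCP; equivalently, more cumulated activity is needed to reach the same TCP, as in the abstract.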

  1. Metabolic flux distributions in Corynebacterium glutamicum during growth and lysine overproduction. Reprinted from Biotechnology and Bioengineering, Vol. 41, Pp 633-646 (1993).

    Science.gov (United States)

    Vallino, J J; Stephanopoulos, G

    2000-03-20

The two main contributions of this article are the solidification of Corynebacterium glutamicum biochemistry guided by bioreaction network analysis, and the determination of basal metabolic flux distributions during growth and lysine synthesis. The methodology employed makes use of stoichiometrically based mass balances to determine flux distributions in the C. glutamicum metabolic network. Presented are a brief description of the methodology, a thorough literature review of glutamic acid bacteria biochemistry, and specific results obtained through a combination of fermentation studies and analysis-directed intracellular assays. The latter include the findings that the glyoxylate shunt is inactive and that phosphoenolpyruvate carboxylase (PPC) is the only anaplerotic reaction expressed in C. glutamicum cultivated on glucose minimal media. Network simplifications afforded by the above findings facilitated the determination of metabolic flux distributions under a variety of culture conditions and led to the following conclusions: both the pentose phosphate pathway and PPC support significant fluxes during growth and lysine overproduction, and flux partitioning at the glucose-6-phosphate branch point does not appear to limit lysine synthesis. Copyright 1993 John Wiley & Sons, Inc.
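When the simplified network is fully determined, the stoichiometric mass-balance approach reduces to solving a square linear system S·v = b for the flux vector. The sketch below uses an invented three-flux toy network around the glucose-6-phosphate branch point, with made-up stoichiometric coefficients and measured rates; it is not the actual C. glutamicum network of the paper.

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy steady-state balances (invented coefficients, NOT the paper's network).
# Unknown fluxes: v = [v_glycolysis, v_pentose_phosphate, v_lysine]
#   G6P balance:   v_glycolysis + v_pentose_phosphate = 10  (measured uptake)
#   NADPH balance: v_pentose_phosphate - 2*v_lysine   = 0   (assumed 2 NADPH/lysine)
#   Measurement:   v_lysine                           = 2   (secretion rate)
S = [[1.0, 1.0, 0.0],
     [0.0, 1.0, -2.0],
     [0.0, 0.0, 1.0]]
b = [10.0, 0.0, 2.0]
print(solve_linear(S, b))  # [6.0, 4.0, 2.0]
```

Real networks are over- or under-determined, so the paper's analysis combines many balances with measured extracellular rates rather than solving a neat square system.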

  2. Determination of neutron flux distribution in an Am-Be irradiator using the MCNP.

    Science.gov (United States)

    Shtejer-Diaz, K; Zamboni, C B; Zahn, G S; Zevallos-Chávez, J Y

    2003-10-01

    A neutron irradiator has been assembled at IPEN facilities to perform qualitative-quantitative analysis of many materials using thermal and fast neutrons outside the nuclear reactor premises. To establish the prototype specifications, the neutron flux distribution and the absorbed dose rates were calculated using the MCNP computer code. These theoretical predictions then allow one to discuss the optimum irradiator design and its performance.

  3. Inverse Estimation of Heat Flux and Temperature Distribution in 3D Finite Domain

    International Nuclear Information System (INIS)

    Muhammad, Nauman Malik

    2009-02-01

Inverse heat conduction problems occur in many theoretical and practical applications where it is difficult or practically impossible to measure the input heat flux and the temperature of the layer conducting the heat flux to the body. Thus it becomes imperative to devise some means to cater for such a problem and estimate the heat flux inversely. The Adaptive State Estimator is one such technique; it works by incorporating the semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The problem presented in this study deals with a three-dimensional system: a cube with one face conducting heat flux while all the other sides are insulated, and with temperatures measured on the accessible faces of the cube. The measurements taken on these accessible faces are fed into the estimation algorithm, and the input heat flux and the temperature distribution at each point in the system are calculated. A variety of input heat flux scenarios have been examined to demonstrate the robustness of the estimation algorithm and hence ensure its usability in practical applications. These include a sinusoidal input flux, a combination of rectangular, linearly changing and sinusoidal input flux, and finally a step-changing input flux. The estimator's performance limitations have been examined in these input set-ups, and the error associated with each set-up is compared to assess the realistic applicability of the estimation algorithm in such scenarios. Different sensor arrangements, that is, different numbers of sensors and their locations (closer to or farther from the input area), are also examined to show the importance of the number and placement of measurements.
Since it is both economically and physically tedious in practice to install a larger number of measurement sensors, an optimized number and location are important to determine for making the study more

  4. Distribution of flux vacua around singular points in Calabi-Yau moduli space

    International Nuclear Information System (INIS)

    Eguchi, Tohru; Tachikawa, Yuji

    2006-01-01

We study the distribution of type-IIB flux vacua in the moduli space near various singular loci, e.g. conifolds, ADE singularities on P^1, Argyres-Douglas points, etc., using the Ashok-Douglas density det(R + ω). We find that the vacuum density is integrable around each of them, irrespective of the type of the singularity. We study in detail an explicit example of an Argyres-Douglas point embedded in a compact Calabi-Yau manifold.

  5. Random phenomena fundamentals of probability and statistics for engineers

    CERN Document Server

    Ogunnaike, Babatunde A

    2009-01-01

Prelude; Approach Philosophy; Four Basic Principles; I Foundations; Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach; Random Phenomena, Variability, and Uncertainty; Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework; II Probability; Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence; Random Variables and Distributions; Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions; Multidimensional Random Variables; Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables; Random Variable Transformations; Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations; Application Case Studies I: Probability; Mendel and Heredity; World War II Warship Tactical Response Under Attack; III Distributions; Ide...

  6. Various models for pion probability distributions from heavy-ion collisions

    International Nuclear Information System (INIS)

    Mekjian, A.Z.; Mekjian, A.Z.; Schlei, B.R.; Strottman, D.; Schlei, B.R.

    1998-01-01

Various models for pion multiplicity distributions produced in relativistic heavy ion collisions are discussed. The models include a relativistic hydrodynamic model, a thermodynamic description, an emitting source pion laser model, and a description which generates a negative binomial distribution. The approach developed can be used to discuss other cases, which will be mentioned. The pion probability distributions for these various cases are compared. Comparisons are made of the pion laser model with Bose-Einstein condensation in a laser trap and with the thermal model. The thermal model and hydrodynamic model are also used to illustrate why the number of pions never diverges and why the Bose-Einstein correction effects are relatively small. The pion emission strength η of a Poisson emitter and a critical density η_c are connected in a thermal model by η/η_c = e^(-m/T) < 1, and this fact reduces any Bose-Einstein correction effects in the number and number fluctuation of pions. Fluctuations can be much larger than Poisson in the pion laser model and for a negative binomial description. The clan representation of the negative binomial distribution due to Van Hove and Giovannini is discussed using the present description. Applications to CERN/NA44 and CERN/NA49 data are discussed in terms of the relativistic hydrodynamic model. Copyright 1998 The American Physical Society.

  7. Analytical models of probability distribution and excess noise factor of solid state photomultiplier signals with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2012-01-01

Silicon Photomultipliers (SiPM), also called Solid State Photomultipliers (SSPM), are based on Geiger-mode avalanche breakdown that is limited by a strong negative feedback. An SSPM can detect and resolve single photons due to the high gain and ultra-low excess noise of avalanche multiplication in this mode. Crosstalk and afterpulsing processes associated with the high gain introduce specific excess noise and deteriorate the photon number resolution of the SSPM. The probabilistic features of these processes are widely studied because of their significance for SSPM design, characterization, optimization and application, but the process modeling is mostly based on Monte Carlo simulations and numerical methods. In this study, crosstalk is considered to be a branching Poisson process, and analytical models of the probability distribution and excess noise factor (ENF) of SSPM signals based on the Borel distribution, as an advance on the geometric-distribution models, are presented and discussed. The models are found to be in good agreement with the experimental probability distributions for dark counts and few-photon spectra over a wide range of fired-pixel numbers, as well as with the observed super-linear behavior of the crosstalk ENF.
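The Borel-based model can be evaluated directly: for a branching Poisson crosstalk process with mean λ < 1, the total number of fired pixels per primary discharge is Borel-distributed, P(n) = e^(−λn)(λn)^(n−1)/n!, and its ENF has the closed form 1/(1−λ). The sketch below checks the moment sums against that form (illustrative code, not from the paper):

```python
import math

def borel_pmf(n, lam):
    """Borel pmf P(n) = e^(-lam*n) * (lam*n)^(n-1) / n!, n >= 1: the total
    number of fired pixels when one primary Geiger discharge triggers
    crosstalk as a branching Poisson process with mean lam < 1.
    Evaluated in log space so large n does not overflow."""
    return math.exp(-lam * n + (n - 1) * math.log(lam * n) - math.lgamma(n + 1))

def excess_noise_factor(lam, nmax=400):
    """ENF = <n^2> / <n>^2 from the truncated pmf; the closed form for
    the Borel case is 1 / (1 - lam)."""
    m1 = sum(n * borel_pmf(n, lam) for n in range(1, nmax))
    m2 = sum(n * n * borel_pmf(n, lam) for n in range(1, nmax))
    return m2 / (m1 * m1)

lam = 0.2  # illustrative crosstalk parameter
print(excess_noise_factor(lam), 1.0 / (1.0 - lam))  # both close to 1.25
```

The same sums also recover the Borel mean 1/(1−λ) and variance λ/(1−λ)³, which is where the closed-form ENF comes from.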

  8. Inverse heat transfer analysis of a functionally graded fin to estimate time-dependent base heat flux and temperature distributions

    International Nuclear Information System (INIS)

    Lee, Haw-Long; Chang, Win-Jin; Chen, Wen-Lih; Yang, Yu-Ching

    2012-01-01

Highlights: ► Time-dependent base heat flux of a functionally graded fin is inversely estimated. ► An inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied. ► The distributions of temperature in the fin are determined as well. ► The influence of measurement error and measurement location upon the precision of the estimated results is also investigated. - Abstract: In this study, an inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied to estimate the unknown time-dependent base heat flux of a functionally graded fin from the knowledge of temperature measurements taken within the fin. Subsequently, the distributions of temperature in the fin can be determined as well. It is assumed that no prior information is available on the functional form of the unknown base heat flux; hence the procedure is classified as function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The influence of measurement errors and measurement location upon the precision of the estimated results is also investigated. Results show that an excellent estimation of the time-dependent base heat flux and temperature distributions can be obtained for the test case considered in this study.

  9. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model for the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the pointwise four-energy-group neutron flux distributions in the MNSR versus the radius, angle and reactor axial directions. Good agreement is noticed between the measured and the calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)

  10. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

    The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has a limited utility. This paper presents the derivation of the probability distribution of maintenance cost, when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function, then the discrete Fourier transform of the characteristic function leads to the complete probability distribution of cost in a finite time setting. The proposed approach is useful for a precise estimation of prediction limits and optimization of the maintenance cost.
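
    The key computational step — recovering a complete discrete distribution from its characteristic function by an inverse DFT — can be illustrated on a distribution whose answer is known in closed form. A Poisson variable stands in here for the maintenance cost; this is only the inversion step, not the paper's renewal-equation derivation:

```python
import cmath, math

lam, N = 3.0, 64   # Poisson mean (stand-in for a discrete cost variable); DFT grid size

def phi(t):
    # characteristic function E[exp(i*t*X)] of a Poisson(lam) random variable
    return cmath.exp(lam * (cmath.exp(1j * t) - 1.0))

# inverse DFT of phi sampled at t_k = 2*pi*k/N recovers the pmf on {0, ..., N-1}
pmf = []
for n in range(N):
    s = sum(phi(2.0 * math.pi * k / N) * cmath.exp(-2j * math.pi * k * n / N)
            for k in range(N))
    pmf.append((s / N).real)

exact = [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(N)]
```

    The recovery is exact up to aliasing of the mass beyond N − 1, so the grid must cover the support; prediction limits then follow directly from cumulative sums of the pmf.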

  11. Measuring sensitivity in pharmacoeconomic studies. Refining point sensitivity and range sensitivity by incorporating probability distributions.

    Science.gov (United States)

    Nuijten, M J

    1999-07-01

    The aim of the present study is to describe a refinement of a previously presented method, based on the concept of point sensitivity, to deal with uncertainty in economic studies. The original method was refined by the incorporation of probability distributions which allow a more accurate assessment of the level of uncertainty in the model. In addition, a bootstrap method was used to create a probability distribution for a fixed input variable based on a limited number of data points. The original method was limited in that the sensitivity measurement was based on a uniform distribution of the variables and that the overall sensitivity measure was based on a subjectively chosen range which excludes the impact of values outside the range on the overall sensitivity. The concepts of the refined method were illustrated using a Markov model of depression. The application of the refined method substantially changed the ranking of the most sensitive variables compared with the original method. The response rate became the most sensitive variable instead of the 'per diem' for hospitalisation. The refinement of the original method yields sensitivity outcomes which better reflect the real uncertainty in economic studies.
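
    The bootstrap step mentioned above — building a probability distribution for a fixed input variable from a limited number of data points — can be sketched as follows (the data values are hypothetical, not taken from the Markov model of depression):

```python
import random, statistics

random.seed(42)
data = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]   # small fixed sample (hypothetical)

B = 5000
boot_means = []
for _ in range(B):
    resample = [random.choice(data) for _ in data]        # draw with replacement
    boot_means.append(statistics.mean(resample))

boot_means.sort()
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]  # 95% percentile interval
```

    The sorted bootstrap replicates approximate the sampling distribution of the input variable, which can then feed the sensitivity analysis instead of a uniform range.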

  12. Exploring non-signalling polytopes with negative probability

    International Nuclear Information System (INIS)

    Oas, G; Barros, J Acacio de; Carvalhaes, C

    2014-01-01

    Bipartite and tripartite EPR–Bell type systems are examined via joint quasi-probability distributions where probabilities are permitted to be negative. It is shown that such distributions exist only when the no-signalling condition is satisfied. A characteristic measure, the probability mass, is introduced and, via its minimization, limits the number of quasi-distributions describing a given marginal probability distribution. The minimized probability mass is shown to be an alternative way to characterize non-local systems. Non-signalling polytopes for two to eight settings in the bipartite scenario are examined and compared to prior work. Examining perfect cloning of non-local systems within the tripartite scenario suggests defining two categories of signalling. It is seen that many properties of non-local systems can be efficiently described by quasi-probability theory. (paper)

  13. Dynamical Origin and Terrestrial Impact Flux of Large Near-Earth Asteroids

    Science.gov (United States)

    Nesvorný, David; Roig, Fernando

    2018-01-01

    Dynamical models of the asteroid delivery from the main belt suggest that the current impact flux of diameter D > 10 km asteroids on the Earth is ≃0.5–1 Gyr⁻¹. Studies of the Near-Earth Asteroid (NEA) population find a much higher flux, with ≃7 D > 10 km asteroid impacts per Gyr. Here we show that this problem is rooted in the application of the impact probability of small NEAs (≃1.5 Gyr⁻¹ per object), whose population is well characterized, to large NEAs. In reality, large NEAs evolve from the main belt by different escape routes, have a different orbital distribution, and lower impact probabilities (0.8 ± 0.3 Gyr⁻¹ per object) than small NEAs. In addition, we find that the current population of two D > 10 km NEAs (Ganymed and Eros) is a slight fluctuation over the long-term average of 1.1 ± 0.5 D > 10 km NEAs in a steady state. These results have important implications for our understanding of the occurrence of K/T-scale impacts on the terrestrial worlds.

  14. Neutronic characterization of cylindrical core of minor excess reactivity in the nuclear reactor IPEN/MB-01 from the measure of neutron flux distribution and its reactivity ratio

    Energy Technology Data Exchange (ETDEWEB)

    Bitelli, Ulysses d' Utra; Aredes, Vitor O.G.; Mura, Luiz E.C.; Santos, Diogo F. dos; Silva, Alexandre P. da, E-mail: ubitelli@ipen.br, E-mail: vitoraredes@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    When compared to a rectangular parallelepiped configuration, the cylindrical configuration of a nuclear reactor core has a better neutron economy, because the probability of neutron leakage is smaller; this increases the overall reactivity of the system for the same amount of fuel. In this work we obtained a critical cylindrical configuration with the control rods 89.50% withdrawn from the active region of the IPEN/MB-01 core. This is the cylindrical configuration with the minimum possible excess reactivity, with a diameter of only 28 fuel rods. For this purpose, 112 peripheral fuel rods are removed from the standard reactor core (a rectangular parallelepiped of 28x28 fuel rods). In this configuration the excess reactivity is approximately 279 pcm. From there, we characterize the neutron field by measuring the spatial distribution of the thermal and epithermal neutron flux for the reactor operating at a power of 83 watts, measured by the neutron noise analysis technique, and 92.08 ± 0.07 watts, measured by the activation technique [10]. The values of the thermal and epithermal neutron flux in different directions (axial, radial north-south and radial east-west) are obtained in the asymptotic region of the reactor core, away from the disturbances caused by the reflector and control rods, by irradiating thin, infinitely diluted gold foils (1% Au - 99% Al) with and without (bare) cadmium covers. In addition to the distribution of neutron flux, the moderator temperature coefficient, the void coefficient, and the control rod calibrations were measured. (author)

  15. Time domain structures in a colliding magnetic flux rope experiment

    Science.gov (United States)

    Tang, Shawn Wenjie; Gekelman, Walter; Dehaas, Timothy; Vincena, Steve; Pribyl, Patrick

    2017-10-01

    Electron phase-space holes, regions of positive potential on the scale of the Debye length, have been observed in auroras as well as in laboratory experiments. These potential structures, also known as Time Domain Structures (TDS), are packets of intense electric field spikes that have significant components parallel to the local magnetic field. In an ongoing investigation at UCLA, TDS were observed on the surface of two magnetized flux ropes produced within the Large Plasma Device (LAPD). A barium oxide (BaO) cathode was used to produce an 18 m long magnetized plasma column and a lanthanum hexaboride (LaB6) source was used to create 11 m long kink unstable flux ropes. Using two probes capable of measuring the local electric and magnetic fields, correlation analysis was performed on tens of thousands of these structures and their propagation velocities, probability distribution function and spatial distribution were determined. The TDS became abundant as the flux ropes collided and appear to emanate from the reconnection region in between them. In addition, a preliminary analysis of the permutation entropy and statistical complexity of the data suggests that the TDS signals may be chaotic in nature. Work done at the Basic Plasma Science Facility (BaPSF) at UCLA which is supported by DOE and NSF.

  16. Fieldable computer system for determining gamma-ray pulse-height distributions, flux spectra, and dose rates from Little Boy

    International Nuclear Information System (INIS)

    Moss, C.E.; Lucas, M.C.; Tisinger, E.W.; Hamm, M.E.

    1984-01-01

    Our system consists of a LeCroy 3500 data acquisition system with a built-in CAMAC crate and eight bismuth-germanate detectors 7.62 cm in diameter and 7.62 cm long. Gamma-ray pulse-height distributions are acquired simultaneously for up to eight positions. The system was very carefully calibrated and characterized from 0.1 to 8.3 MeV using gamma-ray spectra from a variety of radioactive sources. By fitting the pulse-height distributions from the sources with a function containing 17 parameters, we determined theoretical response functions. We use these response functions to unfold the distributions to obtain flux spectra. A flux-to-dose-rate conversion curve based on the work of Dimbylow and Francis is then used to obtain dose rates. Direct use of measured spectra and flux-to-dose-rate curves to obtain dose rates avoids the errors that can arise from spectrum dependence in simple gamma-ray dosimeter instruments. We present some gamma-ray doses for the Little Boy assembly operated at low power. These results can be used to determine the exposures of the Hiroshima survivors and thus aid in the establishment of radiation exposure limits for the nuclear industry
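
    The unfolding step reduces to solving a linear system: the measured pulse-height counts equal the response matrix applied to the group fluxes. A two-channel toy version (all numbers hypothetical; the actual work used response functions fitted with 17 parameters across 0.1 to 8.3 MeV) looks like this:

```python
# toy two-group unfolding: measured counts = R @ flux, then dose rate from flux
R = [[0.8, 0.3],        # response matrix (hypothetical): rows = pulse-height channels,
     [0.2, 0.7]]        # columns = flux groups
counts = [41.0, 23.0]   # measured pulse-height counts (hypothetical)

# solve the 2x2 system R * flux = counts by Cramer's rule
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
flux = [(counts[0] * R[1][1] - R[0][1] * counts[1]) / det,
        (R[0][0] * counts[1] - counts[0] * R[1][0]) / det]

# dose rate from a flux-to-dose-rate conversion curve (illustrative factors)
conv = [0.5, 1.2]
dose_rate = sum(c * f for c, f in zip(conv, flux))
```

    Working through the flux spectrum, rather than applying a single dosimeter response, is what avoids the spectrum-dependence errors mentioned above.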

  17. Experimental data on heat flux distribution from a volumetrically heated pool with frozen boundaries

    International Nuclear Information System (INIS)

    Helle, Maria; Kymaelaeinen, Olli; Tuomisto, Harri

    1999-01-01

    The COPO II experiments are confirmatory experiments and a continuation of the earlier COPO I experiments. As in COPO I, a molten corium pool on the lower head of an RPV is simulated by a two-dimensional slice of it at linear scale 1:2. The corium is simulated by a water-zinc sulfate solution with volumetric Joule heating. The heat flux distribution on the boundaries and the temperature distribution in the pool are measured. The major new feature in COPO II is the cooling arrangement, which is based on circulation of liquid nitrogen on the outside of the pool boundaries. The use of liquid nitrogen leads to the formation of ice on the inside of the boundaries. Two geometrically different versions of the COPO II facility have been constructed: one with a torispherical bottom, simulating the RPV of a VVER-440 reactor as in COPO I, and another with a semicircular bottom, simulating a western PWR such as the AP600. The modified Rayleigh number in the COPO II experiments corresponds to the one in a prototypic corium pool (∼10¹⁵). This paper reports results from the COPO II-Lo and COPO II-AP experiments with a homogeneous pool. Results indicate that the upward heat fluxes are in agreement with the results of the COPO I experiments. Also, as expected, the time-averaged upward heat flux profile was relatively flat. On the other hand, the heat fluxes at the side and bottom boundaries of the pool were slightly higher in COPO II-Lo than in COPO I. In COPO II-AP, the average heat transfer coefficients to the curved boundary were higher than predicted by Jahn's and Mayinger's correlation, but slightly lower than in the BALI experiments. (authors)

  18. Measurement of thermal neutron distribution of flux in the fuel element cluster - Progress report

    International Nuclear Information System (INIS)

    Krcevinac, S.B.; Takac, S.M.

    1966-12-01

    The relative distribution of the thermal neutron flux in the fuel element cluster (19 UO₂ rods with Zr-II cladding, D₂O moderator and coolant) was measured by the newly developed cell perturbation method. The obtained values of the mean density ratios are compared to the results obtained by the TER-I code using the Amouyal-Benoist model.

  19. Flux distribution in phantom for biomedical use of beam-type thermal neutrons

    International Nuclear Information System (INIS)

    Aoki, Kazuhiko; Kobayashi, Tooru; Kanda, Keiji; Kimura, Itsuro

    1985-01-01

    For boron neutron capture therapy, a thermal neutron beam is valuable both for therapeutic irradiation without useless and unfavorable exposure of the normal tissue around the tumor, and for a microanalysis system to measure ppm-order ¹⁰B concentrations in tissue and to search for the location of tumor metastases. In the present study, the thermal neutron flux distribution in a phantom irradiated with a beam of thermal neutrons was measured at the KUR Neutron Guide Tube. The measurements were carried out by two different methods using indium foil. One is the ordinary foil activation technique using the ¹¹⁵In(n, γ)¹¹⁶ᵐ¹In reaction, while the other is to detect γ-rays from the ¹¹⁵In(n, γ)¹¹⁶ᵐ²In reaction during neutron irradiation with a hand-held Ge detector. Calculations with DOT 3.5 were performed to examine the thermal neutron flux in the phantom for various beam and phantom sizes. The experimental and calculated results are in good agreement, and it is shown that the second type of measurement has potential for practical application as a new monitoring system of the thermal neutron flux in a living body for boron neutron capture therapy. (author)

  20. Updated determination of the solar neutrino fluxes from solar neutrino data

    Energy Technology Data Exchange (ETDEWEB)

    Bergström, Johannes [Departament d’Estructura i Constituents de la Matèria and Institut de Ciencies del Cosmos,Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Gonzalez-Garcia, M. C. [Departament d’Estructura i Constituents de la Matèria and Institut de Ciencies del Cosmos,Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Institució Catalana de Recerca i Estudis Avançats (ICREA) (Spain); C.N. Yang Institute for Theoretical Physics,State University of New York at Stony Brook, Stony Brook, NY 11794-3840 (United States); Maltoni, Michele [Instituto de Física Teórica UAM/CSIC,Calle de Nicolás Cabrera 13-15, Universidad Autónoma de Madrid,Cantoblanco, E-28049 Madrid (Spain); Peña-Garay, Carlos [Instituto de Física Corpuscular (IFIC), CSIC and Universitat de Valencia,Calle Catedrático José Beltrán, 2, E-46090 Paterna, Valencia (Spain); Serenelli, Aldo M. [Institut de Ciencies de l’Espai (ICE-CSIC/IEEC),Campus UAB, Carrer de Can Magrans s/n, 08193 Cerdanyola del Valls (Spain); Song, Ningqiang [C.N. Yang Institute for Theoretical Physics,State University of New York at Stony Brook, Stony Brook, NY 11794-3840 (United States)

    2016-03-18

    We present an update of the determination of the solar neutrino fluxes from a global analysis of the solar and terrestrial neutrino data in the framework of three-neutrino mixing. Using a Bayesian analysis we reconstruct the posterior probability distribution function for the eight normalization parameters of the solar neutrino fluxes plus the relevant masses and mixing, with and without imposing the luminosity constraint. We then use these results to compare the description provided by different Standard Solar Models. Our results show that, at present, both models with low and high metallicity can describe the data with equivalent statistical agreement. We also argue that even with the present experimental precision the solar neutrino data have the potential to improve the accuracy of the solar model predictions.

  1. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the upcoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.
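
    The Monte Carlo approach described above can be sketched by comparing coincidence-count distributions for Poisson spike trains and more regular gamma-ISI trains (a simple renewal stand-in for the spike-history dependence studied in the paper; rates and bin width are assumed values):

```python
import random

random.seed(7)
RATE, T, BIN = 20.0, 10.0, 0.005   # firing rate (Hz), duration (s), coincidence bin (s)

def train(shape):
    # renewal train with gamma ISIs of mean 1/RATE; shape=1 is a Poisson train,
    # larger shape gives a more regular, non-Poisson train
    t, spikes = 0.0, []
    while t < T:
        t += random.gammavariate(shape, 1.0 / (RATE * shape))
        spikes.append(t)
    return spikes

def coincidences(a, b):
    # number of time bins occupied in both trains
    return len({int(t / BIN) for t in a} & {int(t / BIN) for t in b})

poisson_counts = [coincidences(train(1.0), train(1.0)) for _ in range(500)]
regular_counts = [coincidences(train(4.0), train(4.0)) for _ in range(500)]

def mean_std(xs):
    m = sum(xs) / len(xs)
    return m, (sum((v - m) ** 2 for v in xs) / len(xs)) ** 0.5
```

    Comparing the spread of the two empirical distributions illustrates how the ISI autostructure, not just the firing rate, shapes the coincidence distribution used for significance testing.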

  2. Evaluating the suitability of wind speed probability distribution models: A case of study of east and southeast parts of Iran

    International Nuclear Information System (INIS)

    Alavi, Omid; Mohammadi, Kasra; Mostafaeipour, Ali

    2016-01-01

    Highlights: • Suitability of different wind speed probability functions is assessed. • 5 stations distributed in the east and south-east of Iran are considered as case studies. • The Nakagami distribution is tested for the first time and compared with 7 other functions. • Due to differences in wind features, the best function is not the same for all stations. - Abstract: Precise information on the wind speed probability distribution is truly significant for many wind energy applications. The objective of this study is to evaluate the suitability of different probability functions for estimating wind speed distributions at five stations distributed in the east and southeast of Iran. The Nakagami distribution function is utilized for the first time to estimate the distribution of wind speed. The performance of the Nakagami function is compared with seven typically used distribution functions. The achieved results reveal that the most effective function is not the same for all stations. Wind speed characteristics and the quantity and quality of the recorded wind speed data can be considered influential parameters in the performance of the distribution functions. Also, the skewness of the recorded wind speed data may influence the accuracy of the Nakagami distribution. For the Chabahar and Khaf stations the Nakagami distribution shows the highest performance, while for the Lutak, Rafsanjan and Zabol stations the Gamma, Generalized Extreme Value and Inverse-Gaussian distributions offer the best fits, respectively. Based on the analysis, the Nakagami distribution can generally be considered an effective distribution, since it provides the best fits at 2 stations and ranks 3rd to 5th at the remaining stations; however, due to the close performance of the Nakagami and Weibull distributions, and also the widely proven flexibility of the Weibull function, more assessments of the performance of the Nakagami distribution are required.

  3. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
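
    The core effect — a plug-in threshold calibrated on a small sample fails more often than its nominal rate — is easy to reproduce for the log-normal case (a minimal sketch, not the article's exact calculation; working on the log scale makes the loss a location-scale Gaussian):

```python
import random, statistics

random.seed(3)
p0 = 0.05            # nominal (required) failure probability
z95 = 1.6449         # standard normal 95% quantile
n = 20               # small sample used to estimate the loss distribution

trials, failures = 20000, 0
for _ in range(trials):
    # log-losses are Gaussian, i.e. losses are log-normal (a location-scale case)
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mu_hat = statistics.mean(sample)
    sd_hat = statistics.stdev(sample)
    threshold = mu_hat + z95 * sd_hat       # plug-in control aiming at p0 = 5%
    if random.gauss(0.0, 1.0) > threshold:  # next period's loss
        failures += 1

freq = failures / trials                    # realized failure frequency
```

    Because (X − μ̂)/σ̂ is pivotal for location-scale families, the realized frequency does not depend on the true μ and σ, which is what makes the exact corrections discussed in the article possible.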

  4. Transport of Internetwork Magnetic Flux Elements in the Solar Photosphere

    Science.gov (United States)

    Agrawal, Piyush; Rast, Mark P.; Gošić, Milan; Bellot Rubio, Luis R.; Rempel, Matthias

    2018-02-01

    The motions of small-scale magnetic flux elements in the solar photosphere can provide some measure of the Lagrangian properties of the convective flow. Measurements of these motions have been critical in estimating the turbulent diffusion coefficient in flux-transport dynamo models and in determining the Alfvén wave excitation spectrum for coronal heating models. We examine the motions of internetwork flux elements in Hinode/Narrowband Filter Imager magnetograms and study the scaling of their mean squared displacement and the shape of their displacement probability distribution as a function of time. We find that the mean squared displacement scales super-diffusively with a slope of about 1.48. Super-diffusive scaling has been observed in other studies for temporal increments as small as 5 s, increments over which ballistic scaling would be expected. Using high-cadence MURaM simulations, we show that the observed super-diffusive scaling at short increments is a consequence of random changes in barycenter positions due to flux evolution. We also find that for long temporal increments, beyond granular lifetimes, the observed displacement distribution deviates from that expected for a diffusive process, evolving from Rayleigh to Gaussian. This change in distribution can be modeled analytically by accounting for supergranular advection along with granular motions. These results complicate the interpretation of magnetic element motions as strictly advective or diffusive on short and long timescales and suggest that measurements of magnetic element motions must be used with caution in turbulent diffusion or wave excitation models. We propose that passive tracer motions in measured photospheric flows may yield more robust transport statistics.
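
    The mean-squared-displacement analysis itself is straightforward to reproduce on synthetic tracks; the sketch below measures the log-log slope for plain 2-D random walks, for which the slope should be near 1 (the observed value of about 1.48 for magnetic elements indicates super-diffusion):

```python
import math, random

random.seed(5)
STEPS, WALKERS = 200, 400
paths = []
for _ in range(WALKERS):
    x = y = 0.0
    pos = [(0.0, 0.0)]
    for _ in range(STEPS):
        ang = random.uniform(0.0, 2.0 * math.pi)   # unit step in a random direction
        x += math.cos(ang)
        y += math.sin(ang)
        pos.append((x, y))
    paths.append(pos)

taus = [1, 2, 4, 8, 16, 32, 64]
msd = []
for tau in taus:
    disp2 = [(p[tau][0] - p[0][0]) ** 2 + (p[tau][1] - p[0][1]) ** 2 for p in paths]
    msd.append(sum(disp2) / WALKERS)

# log-log slope of MSD vs tau: ~1 for normal diffusion, >1 super-diffusive, 2 ballistic
lx = [math.log(t) for t in taus]
ly = [math.log(m) for m in msd]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)
```

    Adding a correlated drift to the steps (mimicking supergranular advection) would push the slope above 1, which is the kind of effect the paper models analytically.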

  5. New method for extracting tumors in PET/CT images based on the probability distribution

    International Nuclear Information System (INIS)

    Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori

    2006-01-01

    In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)
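
    The adaptive-threshold idea — derive the tumor threshold from each body region's own pixel-value distribution — can be sketched as follows (the pixel values and the mean-plus-two-sigma rule are illustrative assumptions, not the paper's actual criterion):

```python
import statistics

# hypothetical uptake values per body region (arbitrary units)
regions = {
    "thorax":  [1.1, 0.9, 1.3, 1.0, 1.2, 5.8, 1.1, 0.8],
    "abdomen": [2.0, 2.2, 1.9, 2.1, 7.5, 2.3, 2.0, 1.8],
}

tumors = {}
for name, pixels in regions.items():
    m = statistics.mean(pixels)
    s = statistics.stdev(pixels)
    thr = m + 2.0 * s                       # region-specific threshold (assumed rule)
    tumors[name] = [v for v in pixels if v > thr]
```

    A single global threshold would either miss the thoracic outlier or flag normal abdominal uptake; thresholding per region sidesteps that, which is the motivation stated above.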

  6. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    Directory of Open Access Journals (Sweden)

    S. K. Morley

    2007-11-01

    We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model, which scale the contributions to AL from the directly-driven DP2 electrojet and the loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different from the observed distribution. The ranges of the two parameters giving acceptable agreement (at the 95% confidence level) are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.

  7. Methane fluxes during the cold season: distribution and mass transfer in the snow cover of bogs

    Science.gov (United States)

    Smagin, A. V.; Shnyrev, N. A.

    2015-08-01

    Fluxes and the profile distribution of methane in the snow cover of different landscape elements of an oligotrophic West Siberian bog (Mukhrino Research Station, Khanty-Mansiisk autonomous district) have been studied during a cold season. Simple models have been proposed for the description of methane distribution in the inert snow layer, which combine the transport of the gas with a source of constant intensity on the soil surface. The formation rates of stationary methane profiles in the snow cover have been estimated (characteristic time of 24 h). Theoretical equations have been derived for the calculation of small emission fluxes from bogs to the atmosphere on the basis of the stationary profile distribution parameters, the snow porosity, and the effective methane diffusion coefficient in the snow layer. The calculated values of methane emission significantly exceeded (by factors of 2-3 up to several tens) the values measured under field conditions by the closed chamber method (0.008-0.25 mg C/(m2 h)), which indicates the possibility of underestimating the contribution of the cold period to the annual emission cycle of bog methane.
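
    The flux calculation from a stationary profile is essentially Fick's law with an effective diffusivity reduced by the snow porosity. A minimal sketch with assumed values (the concentrations, porosity and tortuosity below are illustrative, not the measured ones):

```python
# Fick's law through the snow pack with an effective diffusivity
D_air = 1.9e-5          # CH4 diffusivity in free air near 0 C, m2/s
porosity = 0.6          # snow porosity (assumed)
tortuosity = 0.7        # tortuosity correction (assumed)
D_eff = D_air * porosity * tortuosity

h = 0.5                                # snow depth, m
c_base, c_top = 1.4, 1.1               # CH4 at snow base / surface, mg C/m3 (illustrative)

flux = D_eff * (c_base - c_top) / h    # stationary (linear-profile) flux, mg C/(m2 s)
flux_per_hour = flux * 3600.0          # mg C/(m2 h)
```

    With these assumed numbers the flux lands near 0.017 mg C/(m2 h), i.e. within the range measured by the chamber method, showing how a stationary profile plus snow properties yields an emission estimate.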

  8. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham

    2017-04-07

    This letter studies the meta distribution of the coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor on the uplink performance are analyzed. The results show that decreasing p and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.

  9. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm is applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration, the conditional probability distribution plays a very important role in achieving high image quality. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)
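
    For reference, the E-M (MLEM) image update that such a conditional probability (system) matrix feeds into can be written in a few lines; the 3x2 system matrix below is a made-up toy, not an MCNP5-derived one:

```python
# tiny MLEM (E-M) reconstruction: 2 image pixels, 3 detector bins (illustrative)
A = [[0.7, 0.1],
     [0.2, 0.6],
     [0.1, 0.3]]                      # hypothetical system matrix, p(detect i | emit j)
x_true = [4.0, 9.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]  # noiseless data

x = [1.0, 1.0]                        # uniform initial image
for _ in range(2000):
    proj = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]  # forward project
    for j in range(2):
        sens = sum(A[i][j] for i in range(3))          # detection sensitivity of pixel j
        # back-project the measured/expected ratios, weighted by A[i][j]
        x[j] *= sum(A[i][j] * y[i] / proj[i] for i in range(3)) / sens
```

    Each iteration forward-projects the current image, compares with the measured counts, and back-projects the ratios weighted by the conditional probabilities A[i][j]; the quality of those probabilities directly controls the reconstruction, which motivates computing them accurately with MCNP5.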

  11. Continuous-energy adjoint flux and perturbation calculation using the iterated fission probability method in Monte-Carlo code TRIPOLI-4 and underlying applications

    International Nuclear Information System (INIS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.

    2013-01-01

    The first goal of this paper is to present an exact method able to precisely evaluate very small reactivity effects (<10 pcm) with a Monte Carlo code. It was decided to implement the exact perturbation theory in TRIPOLI-4 and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4 is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method against the 'direct' estimation of the perturbation. Once again the method based on the IFP shows good agreement, for a calculation time far shorter than that of the 'direct' method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows very small reactivity perturbations to be calculated with high precision. It also offers the possibility to split reactivity contributions over both isotopes and reactions. Other applications of this perturbation method, such as the calculation of exact kinetic parameters (βeff, Λeff) or sensitivity parameters, are presented and tested.

  12. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
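
For an equal-prior single-interval (yes/no) task with discrete observations, the maximum Pc reduces to summing the larger of the two probability masses at each observation value, since the ideal observer always picks the more likely hypothesis. A minimal sketch of that special case (the pmfs below are made-up illustrative values, not data from the article):

```python
def max_pc_yes_no(p_signal, p_noise):
    """Maximum proportion correct for an equal-prior yes/no task.

    The ideal observer answers 'signal' whenever the observation is more
    likely under the signal pmf, so Pc_max = 0.5 * sum_x max(p_s(x), p_n(x)).
    """
    return 0.5 * sum(max(ps, pn) for ps, pn in zip(p_signal, p_noise))

# Two overlapping pmfs over four discrete observation levels (made-up values)
p_noise = [0.4, 0.3, 0.2, 0.1]
p_signal = [0.1, 0.2, 0.3, 0.4]
print(max_pc_yes_no(p_signal, p_noise))  # = 0.5*(0.4+0.3+0.3+0.4) = 0.7
```

Identical pmfs yield the chance level 0.5, as expected for an observer that cannot discriminate the two hypotheses.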

  13. The probability distribution of extreme precipitation

    Science.gov (United States)

    Korolev, V. Yu.; Gorshenin, A. K.

    2017-12-01

    On the basis of the negative binomial distribution of the duration (in days) of wet periods, an asymptotic model is proposed for the distribution of the maximum daily rainfall volume during a wet period. The model has the form of a mixture of Frechet distributions and coincides with the distribution of a positive power of a random variable having the Fisher-Snedecor distribution. The proof of the corresponding result is based on limit theorems for extreme order statistics in samples of random size with a mixed Poisson distribution. The adequacy of the proposed models and of the methods for their statistical analysis is demonstrated by the example of estimating the extreme-value distribution parameters from real data.
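
The limiting scheme described above (a maximum taken over a random number of daily amounts, with the count negative-binomially distributed) can be illustrated by direct simulation. A sketch under assumed, illustrative parameters: a Pareto law stands in for the daily rainfall distribution, which the article does not prescribe:

```python
import random

def simulate_wet_period_maxima(r, p, n_periods, seed=1):
    """Simulate the maximum daily amount per wet period.

    Wet-period length ~ negative binomial(r, p) (failures before the r-th
    success, floored at 1 day); daily amounts are drawn from a heavy-tailed
    Pareto as an illustrative stand-in for real rainfall.
    """
    rng = random.Random(seed)

    def neg_binomial():
        # count failures before the r-th success in Bernoulli(p) trials
        failures, successes = 0, 0
        while successes < r:
            if rng.random() < p:
                successes += 1
            else:
                failures += 1
        return failures

    maxima = []
    for _ in range(n_periods):
        days = max(1, neg_binomial())
        # Pareto(alpha=3) daily amounts, support [1, inf)
        maxima.append(max(rng.paretovariate(3) for _ in range(days)))
    return maxima

maxima = simulate_wet_period_maxima(r=2, p=0.5, n_periods=1000)
print(min(maxima) >= 1.0)  # True: Pareto support starts at 1
```

A histogram of `maxima` would then be the empirical counterpart of the Frechet-mixture limit derived in the paper.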

  14. Probabilistic model for fluences and peak fluxes of solar energetic particles

    International Nuclear Information System (INIS)

    Nymmik, R.A.

    1999-01-01

    The model is intended for calculating the probability for solar energetic particles (SEP), i.e., protons and Z=2-28 ions, to have an effect on hardware and on biological and other objects in space. The model describes the probability for the ≥10 MeV/nucleon SEP fluences and peak fluxes to occur in the near-Earth space beyond the Earth's magnetosphere under varying solar activity. The physical prerequisites of the model are as follows. The occurrence of SEP events is a probabilistic process. The mean SEP occurrence frequency is a power-law function of solar activity (sunspot number). The SEP event size (taken to be the ≥30 MeV proton fluence) distribution is a power-law function within a 10^5-10^11 proton/cm^2 range. The SEP event particle energy spectra are described by a common function whose parameters are distributed log-normally. The SEP mean composition is energy-dependent and suffers fluctuations described by log-normal functions in separate events.
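
The power-law event-size distribution underlying such a model can be sampled by inverse-CDF transformation on the truncated range quoted in the abstract. A hedged sketch (the exponent `gamma=1.4` is an illustrative placeholder, not the model's fitted value):

```python
import random

def sample_powerlaw(gamma, xmin, xmax, n, seed=7):
    """Inverse-CDF sampling from a truncated power law p(x) ~ x**(-gamma)
    on [xmin, xmax], valid for gamma != 1."""
    rng = random.Random(seed)
    a = 1.0 - gamma
    lo, hi = xmin ** a, xmax ** a
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

# Event-size range quoted in the abstract; gamma is an illustrative placeholder
samples = sample_powerlaw(gamma=1.4, xmin=1e5, xmax=1e11, n=10000)
print(round(min(samples)), round(max(samples)))
```

The same sampler, combined with a Poisson-like occurrence frequency, gives a simple probabilistic fluence budget for a mission duration.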

  15. Measurement of angular distribution of neutron flux for the 6 MeV race-track microtron based pulsed neutron source

    Energy Technology Data Exchange (ETDEWEB)

    Patil, B.J., E-mail: bjp@physics.unipune.ernet.i [Department of Physics, University of Pune, Pune 411 007 (India); Chavan, S.T.; Pethe, S.N.; Krishnan, R. [SAMEER, IIT Powai Campus, Mumbai 400 076 (India); Dhole, S.D., E-mail: sanjay@physics.unipune.ernet.i [Department of Physics, University of Pune, Pune 411 007 (India)

    2010-09-15

    The 6 MeV race-track microtron based pulsed neutron source has been designed specifically for the elemental analysis of short-lived activation products, where a low neutron flux is desirable. Electrons impinge on an e-{gamma} target to generate bremsstrahlung radiation, which further produces neutrons by photonuclear reaction in a {gamma}-n target. The optimisation of these targets and the estimation of their spectra were carried out using the FLUKA code. The neutron flux was measured by activation of vanadium at different scattering angles. The angular distribution of the neutron flux indicates that the flux decreases with increasing angle, in good agreement with the FLUKA simulation.

  16. Snow-melt flood frequency analysis by means of copula based 2D probability distributions for the Narew River in Poland

    Directory of Open Access Journals (Sweden)

    Bogdan Ozga-Zielinski

    2016-06-01

    New hydrological insights for the region: The results indicated that the 2D normal probability distribution model gives a better probabilistic description of snowmelt floods characterized by the 2-dimensional random variable (Qmax,f, Vf) compared to the elliptical Gaussian copula and Archimedean 1-parameter Gumbel–Hougaard copula models, in particular from the viewpoint of probability of exceedance as well as complexity and time of computation. Nevertheless, the copula approach offers a new perspective in estimating the 2D probability distribution for multidimensional random variables. Results showed that the 2D model for snowmelt floods built using the Gumbel–Hougaard copula is much better than the model built using the Gaussian copula.

  17. Uncertainty of Hydrological Drought Characteristics with Copula Functions and Probability Distributions: A Case Study of Weihe River, China

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2017-05-01

    Full Text Available This study investigates the sensitivity and uncertainty of hydrological drought frequency and severity in the Weihe Basin, China during 1960–2012, using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used for testing the goodness-of-fit of the univariate models, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at the three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, with an increase of the return period, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model hydrological drought frequency and severity. The results of this study can not only help drought investigations select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.
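
For reference, the 1-parameter Gumbel–Hougaard copula used in studies like this one has the closed form C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), and a joint ("OR") return period follows as mu/(1 - C(u,v)) for mean interarrival time mu. A minimal sketch (the parameter values are illustrative, not the fitted ones from the study):

```python
import math

def gumbel_hougaard(u, v, theta):
    """Gumbel-Hougaard copula C(u, v), dependence parameter theta >= 1."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period(u, v, theta, mu=1.0):
    """'OR' return period: mean interarrival time mu over P(U > u or V > v)."""
    return mu / (1.0 - gumbel_hougaard(u, v, theta))

# theta = 1 reduces to independence, C(u, v) = u * v
print(gumbel_hougaard(0.9, 0.8, 1.0))            # ~0.72
print(joint_return_period(0.99, 0.99, theta=2.0))
```

Varying `theta` shows directly how the dependence strength between duration and severity changes the joint return period, which is the sensitivity examined in the paper.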

  18. DETERMINATION OF THE FLUX DENSITY IN THE CORES OF DISTRIBUTION TRANSFORMERS BUILT WITH COMBINED USE OF GRAIN-ORIENTED AND NON-GRAIN-ORIENTED MAGNETIC STEELS

    Directory of Open Access Journals (Sweden)

    I.V. Pentegov

    2015-12-01

    Full Text Available Purpose. The development of a calculation method to determine the flux densities in different parts of the magnetic cores of distribution transformers built from different types of magnetic steel (mixed cores). Methodology. The method is based on the principles of theoretical electrical engineering, namely the theory of the electromagnetic field in nonlinear media, applied to determine the distribution of magnetic flux in a mixed transformer core that combines steels with different magnetic properties. Results. The developed method makes it possible to calculate the flux density and the influence of the skin effect in different parts of the magnetic cores of distribution transformers in which a mix of grain-oriented (GO) and non-grain-oriented (NGO) steels is used. The general basic conditions for the calculation of the flux density in the GO- and NGO-steel laminations of the magnetic core were determined: the magnetic field strength is the same for the laminations of a particular part of the mixed core, and the sum of the magnetic fluxes in the GO and NGO steels in a particular part of the mixed core is equal to the designed magnetic flux in that part. It was found that the magnetic flux in a mixed core has a specific distribution between the magnetic steels: the flux density is higher in laminations of GO steel and lower in laminations of NGO steel, because magnetic flux passes more easily through GO-steel laminations, which have better magnetic conductance than NGO-steel laminations. Originality. The combined use of different types of magnetic steels in the cores of distribution transformers makes it possible to design transformers with a low level of no-load losses, high efficiency and optimal cost. Practical value. The determination of the flux density in different parts of a magnetic core with GO and NGO steels makes it possible to accurately calculate

  19. Probability distribution of dose rates in body tissue as a function of the rhythm of Sr-90 administration and the age of animals

    International Nuclear Information System (INIS)

    Rasin, I.M.; Sarapul'tsev, I.A.

    1975-01-01

    The probability distribution of tissue radiation doses in the skeleton was studied in experiments on swine and dogs. When Sr-90 was introduced into the organism from the day of birth up to 90 days, the dose-rate probability distribution is characterized by one or, for adult animals, by two independent aggregates. Each of these aggregates corresponds to the normal distribution law

  20. Analysis of Neutron Flux Distribution in the RSG-GAS Reactor with U-Mo Fuels

    Directory of Open Access Journals (Sweden)

    Taswanda Taryo

    2004-01-01

    Full Text Available The use of U-Mo fuels in research reactors seems promising and, recently, researchers worldwide have actively carried out such activities. The National Nuclear Energy Agency (BATAN), which owns the RSG-GAS reactor at the Serpong Research Center for Atomic Energy, should anticipate this trend. Therefore, this research work on the use of U-Mo fuels in the RSG-GAS reactor was carried out. The work focused on the analysis of the neutron flux distribution in the RSG-GAS reactor for different molybdenum contents in U-Mo fuels. To begin with, the RSG-GAS reactor core was modeled and simulated in the X, Y and Z dimensions. Cross sections of materials based on the developed cells of standard and control fuels were then generated using WIMS-D5-B. The criticality calculations were finally carried out applying the BATAN-2DIFF code. The results showed that the neutron flux distribution obtained in the U-Mo-fuel-based RSG-GAS core is very similar to that achieved in the 300-gram silicide-fuel-based RSG-GAS reactor core. Indeed, the utilization of the U-Mo RSG-GAS core can be very similar to that of the high-density silicide reactor core and could even be better in the future.

  1. Flux distribution measured by neutron semiconductor detectors during the startup of the EL4 reactor

    International Nuclear Information System (INIS)

    Fuster, S.; Tarabella, A.

    1967-01-01

    The CEA developed neutron semiconductor detectors which allow quasi-instantaneous monitoring of the neutron flux distribution when placed in a reactor during tests. These detectors were tested in the EL4 reactor. The experiment and the results are presented and compared with reference mappings. (A.L.B.)

  2. Assessing the Adequacy of Probability Distributions for Estimating the Extreme Events of Air Temperature in Dabaa Region

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2015-01-01

    Assessing the adequacy of probability distributions for estimating the extreme events of air temperature in the Dabaa region is one of the pre-requisites for any design purpose at the Dabaa site, which can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These distributions include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied for estimating the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests involving Kolmogorov-Smirnov and Anderson-Darling are used for checking the adequacy of the fit of the method/distribution for the estimation of maximum/minimum temperature. Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated as performance indicators to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicated that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable, accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators showed the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution with the Lieblein technique is the best for October
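
For the Gumbel (maximum) distribution, the Method of Moments estimates have closed forms: scale beta = sqrt(6)*s/pi and location mu = mean - gamma_E*beta (gamma_E is the Euler-Mascheroni constant), with the T-period return level mu - beta*ln(-ln(1 - 1/T)). A minimal sketch with made-up annual maximum temperatures, not the Dabaa measurements:

```python
import math

EULER_GAMMA = 0.5772156649015329

def gumbel_mom(sample):
    """Method-of-moments fit of the Gumbel (maximum) distribution:
    scale beta = sqrt(6)*s/pi, location mu = mean - Euler_gamma*beta."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def gumbel_return_level(mu, beta, T):
    """Design value exceeded on average once every T periods (T > 1)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Made-up annual maximum temperatures (deg C), NOT the Dabaa data
mu, beta = gumbel_mom([30.1, 32.5, 29.8, 35.0, 33.2, 31.4, 34.1, 30.9])
print(round(mu, 2), round(beta, 2))
print(round(gumbel_return_level(mu, beta, T=50.0), 1))  # 50-year design value
```

The same two-line recipe, with the minimum-value convention mirrored, covers the Gumbel/Weibull fits for extreme minima discussed in the study.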

  3. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.

  4. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400, La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)

    2009-09-15

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.

  5. A probability space for quantum models

    Science.gov (United States)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
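
The maximum entropy assignment mentioned above has, for a mean-value constraint over a finite outcome set, the familiar Gibbs/Boltzmann form p_i proportional to exp(-lambda*x_i). A small sketch that recovers the multiplier by bisection (a generic illustration of the method, not the paper's quantum construction):

```python
import math

def maxent_with_mean(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy pmf over `values` subject to a fixed mean.

    The solution has the Gibbs/Boltzmann form p_i ~ exp(-lam * x_i); `lam`
    is found by bisection so the constrained mean matches `target_mean`.
    """
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # mean_for is decreasing in lam
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Fixing the mean at the unconstrained average recovers the uniform pmf
p = maxent_with_mean([1, 2, 3, 4], target_mean=2.5)
print([round(x, 3) for x in p])  # [0.25, 0.25, 0.25, 0.25]
```

Lowering the target mean tilts the weights toward the small outcomes, the discrete analogue of the ensemble construction the abstract refers to.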

  6. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Senkpeil, Ryan R. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Tlatov, Andrey G. [Kislovodsk Mountain Astronomical Station of the Pulkovo Observatory, Kislovodsk 357700 (Russian Federation); Nagovitsyn, Yury A. [Pulkovo Astronomical Observatory, Russian Academy of Sciences, St. Petersburg 196140 (Russian Federation); Pevtsov, Alexei A. [National Solar Observatory, Sunspot, NM 88349 (United States); Chapman, Gary A.; Cookson, Angela M. [San Fernando Observatory, Department of Physics and Astronomy, California State University Northridge, Northridge, CA 91330 (United States); Yeates, Anthony R. [Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE (United Kingdom); Watson, Fraser T. [National Solar Observatory, Tucson, AZ 85719 (United States); Balmaceda, Laura A. [Institute for Astronomical, Terrestrial and Space Sciences (ICATE-CONICET), San Juan (Argentina); DeLuca, Edward E. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Martens, Petrus C. H., E-mail: munoz@solar.physics.montana.edu [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States)

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other to the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection)
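
The composite distribution described above, a linear combination of a Weibull and a log-normal, is straightforward to evaluate numerically. A hedged sketch with arbitrary placeholder parameters (the paper's fitted values are not reproduced here), including a sanity check that the mixture integrates to one:

```python
import math

def weibull_pdf(x, k, lam):
    """Weibull pdf with shape k, scale lam."""
    return (k / lam) * (x / lam) ** (k - 1.0) * math.exp(-((x / lam) ** k))

def lognormal_pdf(x, mu, sigma):
    """Log-normal pdf with log-mean mu, log-sd sigma."""
    return (math.exp(-((math.log(x) - mu) ** 2) / (2.0 * sigma ** 2))
            / (x * sigma * math.sqrt(2.0 * math.pi)))

def composite_pdf(x, w, k, lam, mu, sigma):
    """Linear combination of a Weibull and a log-normal, as in the abstract."""
    return w * weibull_pdf(x, k, lam) + (1.0 - w) * lognormal_pdf(x, mu, sigma)

# Sanity check on made-up parameters: the mixture integrates to ~1
# (trapezoid rule on a dense logarithmic grid from 1e-3 to 1e5)
xs = [10.0 ** (i / 200.0) for i in range(-600, 1001)]
vals = [composite_pdf(x, 0.6, 0.8, 1.0, 1.0, 0.7) for x in xs]
area = sum(0.5 * (vals[i] + vals[i + 1]) * (xs[i + 1] - xs[i])
           for i in range(len(xs) - 1))
print(round(area, 2))  # ~1.0
```

In the paper's setting the Weibull term dominates at small fluxes and the log-normal at large fluxes; with suitable parameters the crossover mimics the 10^21-10^22 Mx transition quoted in the abstract.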

  7. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...

  8. Satellite-Derived Distributions, Inventories and Fluxes of Dissolved and Particulate Organic Matter Along the Northeastern U.S. Continental Margin

    Science.gov (United States)

    Mannino, A.; Hooker, S. B.; Hyde, K.; Novak, M. G.; Pan, X.; Friedrichs, M.; Cahill, B.; Wilkin, J.

    2011-01-01

    Estuaries and the coastal ocean experience a high degree of variability in the composition and concentration of particulate and dissolved organic matter (DOM) as a consequence of riverine and estuarine fluxes of terrigenous DOM, sediments, detritus and nutrients into coastal waters and associated phytoplankton blooms. Our approach integrates biogeochemical measurements, optical properties and remote sensing to examine the distributions and inventories of organic carbon in the U.S. Middle Atlantic Bight and Gulf of Maine. Algorithms developed to retrieve colored DOM (CDOM), Dissolved (DOC) and Particulate Organic Carbon (POC) from NASA's MODIS-Aqua and SeaWiFS satellite sensors are applied to quantify the distributions and inventories of DOC and POC. Horizontal fluxes of DOC and POC from the continental margin to the open ocean are estimated from SeaWiFS and MODIS-Aqua distributions of DOC and POC and horizontal divergence fluxes obtained from the Northeastern North Atlantic ROMS model. SeaWiFS and MODIS imagery reveal the importance of estuarine outflow to the export of CDOM and DOC to the coastal ocean and a net community production of DOC on the shelf.

  9. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be

  10. Disadvantage factors for square lattice cells using a collision probability method

    International Nuclear Information System (INIS)

    Raghav, H.P.

    1976-01-01

    The flux distribution in an infinite square lattice consisting of cylindrical fuel rods and moderator is calculated using a collision probability method. Neutrons are assumed to be monoenergetic, and the sources as well as the scattering are assumed to be isotropic. Carlvik's method is used for the calculation of the collision probabilities. The important features of the method are that the square boundary is treated exactly and the contribution of the surrounding cells is calculated explicitly. The method is programmed in a computer code, CELLC, which carries out the integration by Simpson's rule. The convergence and accuracy of CELLC are assessed by computing disadvantage factors for the well-known Thie lattices and comparing the results with Monte Carlo and other integral transport theory methods used elsewhere. It is demonstrated that it is not correct to apply the white boundary condition in the Wigner-Seitz cell for low pitch and low cross sections. (orig.) [de

  11. A general ray-tracing algorithm for the solution of the neutron transport equation by the collision probability method

    International Nuclear Information System (INIS)

    Ball, G.

    1990-01-01

    The development and analysis of methods for generating first-flight collision probabilities in two-dimensional geometries consistent with Light Water Reactor (LWR) fuel assemblies are examined. A new ray-tracing algorithm is discussed. A number of numerical results are given, demonstrating the feasibility of this algorithm and the effects of the moderator (and fuel) sectorizations on the resulting flux distributions. The collision probabilities are introduced and their subsequent utilization in the flux calculation procedures is illustrated. A brief description of the Coxy-1 and Coxy-2 programs (which were developed in the Reactor Theory Division of the Atomic Energy Agency of South Africa Ltd) has also been added. 41 figs., 9 tabs., 18 refs

  12. p-adic probability interpretation of Bell's inequality

    International Nuclear Information System (INIS)

    Khrennikov, A.

    1995-01-01

    We study the violation of Bell's inequality using a p-adic generalization of the theory of probability. p-adic probability is introduced as a limit of relative frequencies but this limit exists with respect to a p-adic metric. In particular, negative probability distributions are well defined on the basis of the frequency definition. This new type of stochastics can be used to describe hidden-variables distributions of some quantum models. If the hidden variables have a p-adic probability distribution, Bell's inequality is not valid and it is not necessary to discuss the experimental violations of this inequality. ((orig.))

  13. Measuring Robustness of Timetables at Stations using a Probability Distribution

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex

    Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness of the station's infrastructure layout and plan of operation. However, these three methods do not take the timetable at the station into consideration. Therefore, two methods are introduced in this paper, making it possible to estimate the robustness of different timetables at a station or of different infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time...
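
As a toy illustration of how a delay distribution and a buffer time combine: if the primary delay of the first train is exponential with rate lambda, the expected knock-on delay transferred to the following train is E[max(0, D - b)] = exp(-lambda*b)/lambda. The exponential assumption is mine for tractability, not necessarily the paper's model; a sketch comparing the closed form with simulation:

```python
import math
import random

def expected_knock_on(rate, buffer_time):
    """Closed form for E[max(0, D - b)] when the primary delay D ~ Exp(rate):
    the memoryless property gives exp(-rate * b) / rate."""
    return math.exp(-rate * buffer_time) / rate

def simulated_knock_on(rate, buffer_time, n=200000, seed=3):
    """Monte Carlo estimate of the same expectation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += max(0.0, rng.expovariate(rate) - buffer_time)
    return total / n

analytic = expected_knock_on(rate=0.5, buffer_time=2.0)
print(round(analytic, 4))                      # exp(-1)/0.5 ~ 0.7358
print(round(simulated_knock_on(0.5, 2.0), 2))  # close to the analytic value
```

The closed form makes the trade-off visible: each extra unit of buffer shrinks the expected knock-on delay by the factor exp(-rate).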

  14. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    Science.gov (United States)

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R_max, and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy ε(R) = R. The effective chemical potential μ governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter β decreases. Interestingly, βμ is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high-density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R_g, where R_g is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed into Fermi-like subdistributions for different atomic types τ, F_τ(r), with Σ_τ F_τ(r) = F(r), which depend on two additional parameters μ_τ and h_τ. The chemical potential μ_τ affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h_τ, which appears in a type-dependent atomic effective energy, ε_τ(r) = h_τ r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, or ε*_τ(r) = h*_τ r^α, in which case a correlation with hydrophobicity
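The radial Fermi-Dirac form described above is easy to sketch numerically: the shell-volume factor R² multiplied by a Fermi occupation with ε(R) = R. The values of β and μ below are illustrative, not the paper's fitted parameters.

```python
import math

def fermi_shell_weight(R, beta, mu):
    """Unnormalized radial atom density: spherical-shell volume factor R^2
    times a Fermi-Dirac occupation with effective energy eps(R) = R."""
    return R * R / (math.exp(beta * (R - mu)) + 1.0)

beta, mu = 2.0, 15.0        # illustrative values, not fitted to any protein set
dR = 0.01
grid = [i * dR for i in range(1, 3001)]       # R from 0.01 to 30 (arbitrary units)
weights = [fermi_shell_weight(R, beta, mu) for R in grid]
total = sum(weights)
pdf = [w / (total * dR) for w in weights]     # normalized radial probability density
peak_R = grid[max(range(len(grid)), key=lambda i: weights[i])]
frac_inside = sum(w for R, w in zip(grid, weights) if R <= mu) / total
```

The density grows as R² inside the "Fermi surface" and is cut off smoothly near R ≈ μ, reproducing the qualitative shape the abstract describes.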

  15. Invariability of Central Metabolic Flux Distribution in Shewanella oneidensis MR-1 Under Environmental or Genetic Perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Yinjie; Martin, Hector Garcia; Deutschbauer, Adam; Feng, Xueyang; Huang, Rick; Llora, Xavier; Arkin, Adam; Keasling, Jay D.

    2009-04-21

    An environmentally important bacterium with versatile respiration, Shewanella oneidensis MR-1, displayed significantly different growth rates under three culture conditions: minimal medium (doubling time ~3 hrs), salt-stressed minimal medium (doubling time ~6 hrs), and minimal medium with amino acid supplementation (doubling time ~1.5 hrs). 13C-based metabolic flux analysis indicated that fluxes of central metabolic reactions remained relatively constant under the three growth conditions, which is in stark contrast to the reported significant changes in the transcript and metabolite profiles under various growth conditions. Furthermore, ten transposon mutants of S. oneidensis MR-1 were randomly chosen from a transposon library and their flux distributions through central metabolic pathways were revealed to be identical, even though such mutational processes altered the secondary metabolism, for example, glycine and C1 (5,10-Me-THF) metabolism.

  16. Persistent current and transmission probability in the Aharonov-Bohm ring with an embedded quantum dot

    International Nuclear Information System (INIS)

    Wu Suzhi; Li Ning; Jin Guojun; Ma Yuqiang

    2008-01-01

    Persistent current and transmission probability in the Aharonov-Bohm (AB) ring with an embedded quantum dot (QD) are studied using the technique of the scattering matrix. For the first time, we find that the persistent current can arise in the absence of magnetic flux in the ring with an embedded QD. The persistent current and the transmission probability are sensitive to the lead-ring coupling and the short-range potential barrier. It is shown that increasing the lead-ring coupling or the short-range potential barrier causes the suppression of the persistent current and the increasing resonance width of the transmission probability. The effect of the potential barrier on the number of the transmission peaks is also investigated. The dependence of the persistent current and the transmission probability on the magnetic flux exhibits a periodic property with a period of one flux quantum.
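The flux-quantum periodicity noted at the end can already be seen in an idealized two-path interference formula. This toy expression is our stand-in for, and is far simpler than, the scattering-matrix treatment of the paper:

```python
import math

PHI0 = 1.0  # flux quantum, in units where h/e = 1

def transmission(phi, t0=1.0):
    """Idealized two-path Aharonov-Bohm interference:
    T(phi) = t0 * cos^2(pi * phi / PHI0). A toy model, not the
    QD scattering-matrix calculation of the paper."""
    return t0 * math.cos(math.pi * phi / PHI0) ** 2

# Periodicity with a period of one flux quantum
samples = [0.0, 0.25, 0.5, 1.0]
T = {phi: transmission(phi) for phi in samples}
```

The AB phase 2πΦ/Φ0 accumulated between the two arms makes every observable periodic in the enclosed flux with period Φ0.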

  17. Stochastic Economic Dispatch with Wind using Versatile Probability Distribution and L-BFGS-B Based Dual Decomposition

    DEFF Research Database (Denmark)

    Huang, Shaojun; Sun, Yuanzhang; Wu, Qiuwei

    2018-01-01

    This paper focuses on economic dispatch (ED) in power systems with intermittent wind power, a critical issue for future power systems. A stochastic ED problem is formed based on the recently proposed versatile probability distribution (VPD) of wind power. The problem is then analyzed...

  18. Introduction and application of non-stationary standardized precipitation index considering probability distribution function and return period

    Science.gov (United States)

    Park, Junehyeong; Sung, Jang Hyun; Lim, Yoon-Jin; Kang, Hyun-Suk

    2018-05-01

    The widely used meteorological drought index, the Standardized Precipitation Index (SPI), basically assumes stationarity, but recent changes in the climate have led to a need to review this hypothesis. In this study, a new non-stationary SPI is proposed that considers not only the modified probability distribution parameter but also the return period under the non-stationary process. The results were evaluated for two severe drought cases during the last 10 years in South Korea. The SPI under the non-stationary hypothesis estimated lower drought severity than the stationary SPI, even though these two past droughts were recognized as significantly severe. This may be because the variances of summer and autumn precipitation become larger over time, which can make the probability distribution wider than before. This implies that drought expressions by a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when deciding drought levels under a changing climate.
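The core SPI construction (map a precipitation total through a fitted CDF, then through the inverse standard normal) can be sketched in a distribution-free way. Operational SPI fits a gamma distribution rather than using the empirical CDF, and the precipitation numbers below are hypothetical:

```python
from statistics import NormalDist

def spi_from_sample(precip, value):
    """SPI-like score via the empirical CDF (Weibull plotting position)
    mapped through the inverse standard normal. A distribution-free sketch
    of the SPI idea; the operational index fits a gamma CDF first."""
    n = len(precip)
    rank = sum(1 for x in precip if x <= value)   # number of values <= value
    p = rank / (n + 1)                            # plotting position, avoids 0 and 1
    return NormalDist().inv_cdf(p)

precip = [34, 55, 21, 80, 62, 47, 95, 12, 58, 73, 41, 66]  # hypothetical monthly totals
spi_dry = spi_from_sample(precip, 15)   # unusually dry month -> negative SPI
spi_wet = spi_from_sample(precip, 90)   # unusually wet month -> positive SPI
```

A non-stationary variant lets the fitted distribution's parameters drift with time, which is exactly where the widening-variance effect described above enters.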

  19. Potential and flux field landscape theory. I. Global stability and dynamics of spatially dependent non-equilibrium systems.

    Science.gov (United States)

    Wu, Wei; Wang, Jin

    2013-09-28

    We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. 
In the small fluctuation limit, the intrinsic potential field as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems, which is closely related to the steady state probability distribution functional, is

  20. Analyses of moments in pseudorapidity intervals at √s = 546 GeV by means of two probability distributions in pure-birth process

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Suzuki, N.

    1988-01-01

    Moments in pseudorapidity intervals at the CERN Sp̄pS collider (√s = 546 GeV) are analyzed by means of two probability distributions in the pure-birth stochastic process. Our results show that a probability distribution obtained from the Poisson distribution as an initial condition is more useful than that obtained from the Kronecker δ function. Analyses of moments by Koba-Nielsen-Olesen scaling functions derived from solutions of the pure-birth stochastic process are also made. Moreover, analyses of preliminary data at √s = 200 and 900 GeV are added.

  1. Probability distribution of magnetization in the one-dimensional Ising model: effects of boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Antal, T [Physics Department, Simon Fraser University, Burnaby, BC V5A 1S6 (Canada); Droz, M [Departement de Physique Theorique, Universite de Geneve, CH 1211 Geneva 4 (Switzerland); Racz, Z [Institute for Theoretical Physics, Eoetvoes University, 1117 Budapest, Pazmany setany 1/a (Hungary)

    2004-02-06

    Finite-size scaling functions are investigated both for the mean-square magnetization fluctuations and for the probability distribution of the magnetization in the one-dimensional Ising model. The scaling functions are evaluated in the limit of the temperature going to zero (T → 0) and the size of the system going to infinity (N → ∞) while N[1 − tanh(J/k_BT)] is kept finite (J being the nearest-neighbour coupling). Exact calculations using various boundary conditions (periodic, antiperiodic, free, block) demonstrate explicitly how the scaling functions depend on the boundary conditions. We also show that the block (small part of a large system) magnetization distribution results are identical to those obtained for free boundary conditions.
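For small chains, the boundary-condition dependence of the magnetization distribution can be checked directly by exact enumeration. This brute-force sketch is ours and is only feasible for small N; the paper works analytically in the scaling limit:

```python
import itertools, math

def magnetization_distribution(N, K, periodic=True):
    """Exact P(M) for an N-spin 1D Ising chain; K = J/(k_B T)."""
    dist, Z = {}, 0.0
    bonds = range(N) if periodic else range(N - 1)
    for spins in itertools.product((-1, 1), repeat=N):
        s_sum = sum(spins[i] * spins[(i + 1) % N] for i in bonds)
        w = math.exp(K * s_sum)      # Boltzmann weight exp(-E/k_B T), E = -J * s_sum
        M = sum(spins)
        dist[M] = dist.get(M, 0.0) + w
        Z += w
    return {M: wt / Z for M, wt in dist.items()}

P = magnetization_distribution(8, 0.5, periodic=True)
```

The distribution is symmetric in M by spin-flip symmetry, and periodic versus free chains give measurably different P(M) at the same N and K, which is the effect the paper quantifies through scaling functions.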

  2. Understanding the distinctively skewed and heavy tailed character of atmospheric and oceanic probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Sardeshmukh, Prashant D., E-mail: Prashant.D.Sardeshmukh@noaa.gov [CIRES, University of Colorado, Boulder, Colorado 80309 (United States); NOAA/Earth System Research Laboratory, Boulder, Colorado 80305 (United States); Penland, Cécile [NOAA/Earth System Research Laboratory, Boulder, Colorado 80305 (United States)

    2015-03-15

    The probability distributions of large-scale atmospheric and oceanic variables are generally skewed and heavy-tailed. We argue that their distinctive departures from Gaussianity arise fundamentally from the fact that in a quadratically nonlinear system with a quadratic invariant, the coupling coefficients between system components are not constant but depend linearly on the system state in a distinctive way. In particular, the skewness arises from a tendency of the system trajectory to linger near states of weak coupling. We show that the salient features of the observed non-Gaussianity can be captured in the simplest such nonlinear 2-component system. If the system is stochastically forced and linearly damped, with one component damped much more strongly than the other, then the strongly damped fast component becomes effectively decoupled from the weakly damped slow component, and its impact on the slow component can be approximated as a stochastic noise forcing plus an augmented nonlinear damping. In the limit of large time-scale separation, the nonlinear augmentation of the damping becomes small, and the noise forcing can be approximated as an additive noise plus a correlated additive and multiplicative noise (CAM noise) forcing. Much of the diversity of observed large-scale atmospheric and oceanic probability distributions can be interpreted in this minimal framework.
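A minimal numerical sketch of the mechanism: a single damped slow component driven by correlated additive and multiplicative (CAM) noise plus an independent additive noise. The SDE form and parameter values are our illustration, not the authors' model equations:

```python
import math, random

def simulate_cam(nsteps=50_000, dt=0.01, lam=1.0, E=0.5, g=0.5, b=0.5, seed=42):
    """Euler-Maruyama for dx = -lam*x dt + (E*x + g) dW1 + b dW2.
    The (E*x + g) dW1 term is the correlated additive-multiplicative (CAM)
    noise; parameters are illustrative, not fitted to any climate field."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    sdt = math.sqrt(dt)
    for _ in range(nsteps):
        dW1 = rng.gauss(0.0, sdt)
        dW2 = rng.gauss(0.0, sdt)
        x += -lam * x * dt + (E * x + g) * dW1 + b * dW2
        xs.append(x)
    return xs

xs = simulate_cam()
n = len(xs)
mean = sum(xs) / n
var = sum((v - mean) ** 2 for v in xs) / n
skew = sum((v - mean) ** 3 for v in xs) / (n * var ** 1.5)
```

Because the noise amplitude E*x + g is state dependent, excursions on one side of the mean are amplified, producing the skewed, heavy-tailed stationary distributions discussed in the abstract.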

  3. Towards a theoretical determination of the geographical probability distribution of meteoroid impacts on Earth

    Science.gov (United States)

    Zuluaga, Jorge I.; Sucerquia, Mario

    2018-06-01

    Tunguska and Chelyabinsk impact events occurred inside a geographical area of only 3.4 per cent of the Earth's surface. Although two events hardly constitute a statistically significant demonstration of a geographical pattern of impacts, their spatial coincidence is at least tantalizing. To understand if this concurrence reflects an underlying geographical and/or temporal pattern, we must aim at predicting the spatio-temporal distribution of meteoroid impacts on Earth. For this purpose we designed, implemented, and tested a novel numerical technique, `Gravitational Ray Tracing' (GRT), to compute the relative impact probability (RIP) on the surface of any planet. GRT is inspired by the ray-casting techniques used to render realistic images of complex 3D scenes. In this paper we describe the method and the results of testing it at the time of large impact events. Our findings suggest a non-trivial pattern of impact probabilities at any given time on the Earth. Locations at 60-90° from the apex are more prone to impacts, especially at midnight. Counterintuitively, sites close to the apex direction have the lowest RIP, while in the antapex RIP is slightly larger than average. We present preliminary maps of RIP at the time of the Tunguska and Chelyabinsk events and found no evidence of a spatial or temporal pattern, suggesting that their coincidence was fortuitous. We apply the GRT method to compute theoretical RIP at the location and time of 394 large fireballs. Although the predicted spatio-temporal impact distribution matches the observed events only marginally, we successfully predict their impact speed distribution.

  4. Multiscale probability distribution of pressure fluctuations in fluidized beds

    International Nuclear Information System (INIS)

    Ghasemi, Fatemeh; Sahimi, Muhammad; Reza Rahimi Tabar, M; Peinke, Joachim

    2012-01-01

    Analysis of flow in fluidized beds, a common chemical reactor, is of much current interest due to its fundamental as well as industrial importance. Experimental data for the successive increments of the pressure fluctuations time series in a fluidized bed are analyzed by computing a multiscale probability density function (PDF) of the increments. The results demonstrate the evolution of the shape of the PDF from the short to long time scales. The deformation of the PDF across time scales may be modeled by the log-normal cascade model. The results are also in contrast to the previously proposed PDFs for the pressure fluctuations that include a Gaussian distribution and a PDF with a power-law tail. To understand better the properties of the pressure fluctuations, we also construct the shuffled and surrogate time series for the data and analyze them with the same method. It turns out that long-range correlations play an important role in the structure of the time series that represent the pressure fluctuation. (paper)

  5. Calculation of neutron flux distribution of thermal neutrons from microtron converter in a graphite moderator with water reflector

    International Nuclear Information System (INIS)

    Andrejsek, K.

    1977-01-01

    The calculation is made of the thermal neutron flux in the moderator and reflector by solving the neutron diffusion equation using the four-group theory. The correction for neutron absorption in the moderator was carried out using the perturbation theory. The calculation was carried out for four groups with the following energy ranges: the first group 2 MeV to 3 keV, the second group 3 keV to 5 eV, the third group 5 eV to 0.025 eV and the fourth group 0.025 eV. The values of the macroscopic cross section of capture and scattering, of the diffusion coefficient, the macroscopic cross section of the moderator, of the neutron age and the extrapolation length for the water-graphite moderator used in the calculations are given. The spatial distribution of the thermal neutron flux is graphically represented for graphite of a 30, 40, and 50 cm radius and for graphite of a 30 and 40 cm radius with a 10 cm water reflector; a graphic comparison is made of the distribution of the thermal neutron flux in water and in graphite, both 40 cm in radius. The system of graphite with reflector proved to be the best and most efficient system for raising the flux density of thermal neutrons. (J.P.)
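The flavour of such a diffusion calculation can be conveyed by a one-group, one-dimensional finite-difference sketch; the paper's four-group treatment with a reflector is considerably more involved. The material constants below are rough graphite-like numbers of our choosing:

```python
def solve_slab_diffusion(n=101, width=40.0, D=0.84, sigma_a=0.0003, S=1.0):
    """One-group, 1-D finite-difference diffusion: -D phi'' + Sigma_a phi = S,
    with phi = 0 at both faces. An illustrative one-group analogue of the
    four-group scheme; D [cm] and Sigma_a [1/cm] are rough graphite-like values."""
    h = width / (n - 1)
    # Tridiagonal system a*phi[i-1] + b*phi[i] + c*phi[i+1] = d for interior nodes
    a = c = -D / h**2
    b = 2 * D / h**2 + sigma_a
    d = [S] * n
    # Thomas algorithm on the n-2 interior nodes; phi[0] = phi[n-1] = 0
    m = n - 2
    cp, dp = [0.0] * m, [0.0] * m
    cp[0] = c / b
    dp[0] = d[1] / b
    for i in range(1, m):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (d[i + 1] - a * dp[i - 1]) / denom
    phi = [0.0] * n
    for i in range(m - 1, -1, -1):
        phi[i + 1] = dp[i] - (cp[i] * phi[i + 2] if i < m - 1 else 0.0)
    return phi

phi = solve_slab_diffusion()
```

The flux is symmetric and peaks at the slab centre; a multigroup solver repeats this structure per group with scattering sources coupling the groups.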

  6. Distribution of flux-pinning energies in YBa2Cu3O(7-delta) and Bi2Sr2CaCu2O(8+delta) from flux noise

    Science.gov (United States)

    Ferrari, M. J.; Johnson, Mark; Wellstood, Frederick C.; Clarke, John; Mitzi, D.

    1990-01-01

    The spectral density of the magnetic flux noise measured in high-temperature superconductors in low magnetic fields scales approximately as the inverse of the frequency and increases with temperature. The temperature and frequency dependence of the noise are used to determine the pinning energies of individual flux vortices in thermal equilibrium. The distribution of pinning energies lies below 0.1 eV in YBa2Cu3O(7-delta) and near 0.2 eV in Bi2Sr2CaCu2O(8+delta). The noise power is proportional to the ambient magnetic field, indicating that the vortex motion is uncorrelated.

  7. Effect of Magnetic Flux Density and Applied Current on Temperature, Velocity and Entropy Generation Distributions in MHD Pumps

    Directory of Open Access Journals (Sweden)

    M. Kiyasatfar

    2011-01-01

    In the present study, a simulation of steady-state, incompressible, fully developed laminar flow in a magnetohydrodynamic (MHD) pump has been conducted. The governing equations are solved numerically by the finite-difference method. The effect of the magnetic flux density and the applied current on the flow and temperature distributions in an MHD pump is investigated. The results show that the flow and the temperature can be controlled through the applied current and the magnetic flux. Furthermore, the effects of the magnetic flux density and current on entropy generation in the MHD pump are considered. The presented numerical results are in good agreement with the experimental data reported in the literature.

  8. Apparatus for the measurement of the distribution of neutron flux (band and wire continuous recorders)

    Energy Technology Data Exchange (ETDEWEB)

    Derrien, H; Lemerle, Y [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    The apparatus, whose principle and operation we describe, makes it possible to determine the thermal neutron flux distribution in a pile channel or along a fuel element by measuring the β activity of continuous detectors. It also makes it possible to reveal a sudden, very localised discontinuity in the distribution curve. (authors)

  9. Combining scenarios in a calculation of the overall probability distribution of cumulative releases of radioactivity from the Waste Isolation Pilot Plant, southeastern New Mexico

    International Nuclear Information System (INIS)

    Tierney, M.S.

    1991-11-01

    The Waste Isolation Pilot Plant (WIPP), in southeastern New Mexico, is a research and development facility to demonstrate safe disposal of defense-generated transuranic waste. The US Department of Energy will designate WIPP as a disposal facility if it meets the US Environmental Protection Agency's standard for disposal of such waste; the standard includes a requirement that estimates of cumulative releases of radioactivity to the accessible environment be incorporated in an overall probability distribution. The WIPP Project has chosen an approach to calculation of an overall probability distribution that employs the concept of scenarios for release and transport of radioactivity to the accessible environment. This report reviews the use of Monte Carlo methods in the calculation of an overall probability distribution and presents a logical and mathematical foundation for use of the scenario concept in such calculations. The report also draws preliminary conclusions regarding the shape of the probability distribution for the WIPP system; these preliminary conclusions are based on the possible occurrence of three events and the presence of one feature: namely, the events "attempted boreholes over rooms and drifts," "mining alters ground-water regime," and "water-withdrawal wells provide alternate pathways," and the feature "brine pocket below room or drift." Calculation of the WIPP system's overall probability distribution covers only five of the sixteen possible scenario classes that can be obtained by combining the four postulated events or features.
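The combinatorics of scenario classes is easy to make concrete: four postulated events/features give 2^4 = 16 classes, and under an independence assumption (ours for illustration; the report's scenario probabilities need not be independent) their probabilities multiply. The numerical probabilities below are placeholders, not values from the report:

```python
from itertools import product

# Hypothetical occurrence probabilities for the four postulated events/features;
# the report does not attach these numbers, so they are placeholders.
events = {
    "attempted boreholes over rooms and drifts": 0.10,
    "mining alters ground-water regime": 0.05,
    "water-withdrawal wells provide alternate pathways": 0.08,
    "brine pocket below room or drift": 0.20,
}

def scenario_class_probabilities(events):
    """Probability of each of the 2^n scenario classes, assuming the
    events/features occur independently."""
    names = list(events)
    classes = {}
    for occurs in product((False, True), repeat=len(names)):
        p = 1.0
        for name, o in zip(names, occurs):
            p *= events[name] if o else 1.0 - events[name]
        classes[occurs] = p
    return classes

classes = scenario_class_probabilities(events)
```

The class probabilities sum to one by construction, which is what lets per-scenario conditional release distributions be combined into a single overall probability distribution.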

  10. Nuclear data uncertainties: I, Basic concepts of probability

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.

    1988-12-01

    Some basic concepts of probability theory are presented from a nuclear-data perspective, in order to provide a foundation for thorough understanding of the role of uncertainties in nuclear data research. Topics included in this report are: events, event spaces, calculus of events, randomness, random variables, random-variable distributions, intuitive and axiomatic probability, calculus of probability, conditional probability and independence, probability distributions, binomial and multinomial probability, Poisson and interval probability, normal probability, the relationships existing between these probability laws, and Bayes' theorem. This treatment emphasizes the practical application of basic mathematical concepts to nuclear data research, and it includes numerous simple examples. 34 refs.
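Bayes' theorem, the closing topic of the report, in a two-hypothesis worked example; the detector scenario and all numbers are made up for illustration:

```python
def bayes_posterior(prior, likelihoods):
    """Posterior over hypotheses given one observation, via Bayes' theorem.
    prior: dict hypothesis -> P(H); likelihoods: dict hypothesis -> P(D|H)."""
    joint = {h: prior[h] * likelihoods[h] for h in prior}
    evidence = sum(joint.values())          # P(D), by total probability
    return {h: j / evidence for h, j in joint.items()}

# Hypothetical example: is a detector "good" or "degraded",
# given that a test pulse was registered?
prior = {"good": 0.9, "degraded": 0.1}
likelihood = {"good": 0.99, "degraded": 0.60}   # P(pulse registered | state)
post = bayes_posterior(prior, likelihood)
```

Registering the pulse is more likely under the "good" hypothesis, so the posterior shifts further toward "good" than the prior.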

  11. Nuclear data uncertainties: I, Basic concepts of probability

    International Nuclear Information System (INIS)

    Smith, D.L.

    1988-12-01

    Some basic concepts of probability theory are presented from a nuclear-data perspective, in order to provide a foundation for thorough understanding of the role of uncertainties in nuclear data research. Topics included in this report are: events, event spaces, calculus of events, randomness, random variables, random-variable distributions, intuitive and axiomatic probability, calculus of probability, conditional probability and independence, probability distributions, binomial and multinomial probability, Poisson and interval probability, normal probability, the relationships existing between these probability laws, and Bayes' theorem. This treatment emphasizes the practical application of basic mathematical concepts to nuclear data research, and it includes numerous simple examples. 34 refs

  12. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
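For the best-known ARUM, the multinomial logit, the CPGF and its gradient can be written down directly; this standard textbook case is our illustration, not the paper's general construction:

```python
import math

def logit_cpgf(u):
    """CPGF of the multinomial logit model: G(u) = log(sum_i exp(u_i)),
    computed with the max-shift trick for numerical stability."""
    m = max(u)
    return m + math.log(sum(math.exp(v - m) for v in u))

def choice_probabilities(u):
    """Gradient of the CPGF: the logit choice probabilities (softmax)."""
    m = max(u)
    w = [math.exp(v - m) for v in u]
    s = sum(w)
    return [x / s for x in w]

u = [1.0, 2.0, 0.5]   # illustrative systematic utilities
P = choice_probabilities(u)
```

A finite-difference derivative of G recovers P, which is the defining gradient property of a CPGF.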

  13. OpenFLUX: efficient modelling software for 13C-based metabolic flux analysis

    Directory of Open Access Journals (Sweden)

    Nielsen Lars K

    2009-05-01

    Background: The quantitative analysis of metabolic fluxes, i.e., in vivo activities of intracellular enzymes and pathways, provides key information on biological systems in systems biology and metabolic engineering. It is based on a comprehensive approach combining (i) tracer cultivation on 13C substrates, (ii) 13C labelling analysis by mass spectrometry, and (iii) mathematical modelling for experimental design, data processing, flux calculation and statistics. Whereas the cultivation and the analytical part are fairly advanced, a lack of appropriate modelling software solutions for all modelling aspects in flux studies is limiting the application of metabolic flux analysis. Results: We have developed OpenFLUX as a user-friendly, yet flexible software application for small- and large-scale 13C metabolic flux analysis. The application is based on the new Elementary Metabolite Unit (EMU) framework, significantly enhancing computation speed for flux calculation. From simple notation of metabolic reaction networks defined in a spreadsheet, the OpenFLUX parser automatically generates MATLAB-readable metabolite and isotopomer balances, thus strongly facilitating model creation. The model can be used to perform experimental design, parameter estimation and sensitivity analysis using either the built-in gradient-based search or Monte Carlo algorithms, or user-defined algorithms. Exemplified for a microbial flux study with 71 reactions, 8 free flux parameters and mass isotopomer distributions of 10 metabolites, OpenFLUX automatically compiled the EMU-based model from an Excel file containing metabolic reactions and carbon transfer mechanisms, showing its user-friendliness. It reliably reproduced the published data, and optimum flux distributions for the network under study were found quickly. Conclusion: We have developed a fast, accurate application to perform steady-state 13C metabolic flux analysis. OpenFLUX will strongly facilitate and...

  14. Hydrological model calibration for derived flood frequency analysis using stochastic rainfall and probability distributions of peak flows

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2014-01-01

    Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins, considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and, consequently, the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event-based and continuous observed hourly rainfall data, as well as disaggregated daily rainfall and stochastically generated hourly rainfall data, are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series of moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is the...
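Strategy (III) hinges on fitting a probability distribution to annual maximum peak flows. A common choice for such series is the Gumbel (EV1) distribution; the method-of-moments fit below, on synthetic data, is our simplified stand-in for the study's calibration target, not its actual procedure:

```python
import math, random

def fit_gumbel_moments(maxima):
    """Fit a Gumbel (EV1) distribution to annual maxima by the method of
    moments: scale = std * sqrt(6)/pi, loc = mean - gamma_E * scale."""
    n = len(maxima)
    mean = sum(maxima) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in maxima) / (n - 1))
    scale = std * math.sqrt(6) / math.pi
    loc = mean - 0.5772156649 * scale   # Euler-Mascheroni constant
    return loc, scale

def gumbel_quantile(p, loc, scale):
    """Design flood for non-exceedance probability p (p = 0.99 -> T = 100 yr)."""
    return loc - scale * math.log(-math.log(p))

# Synthetic "annual maximum" series (hypothetical units, m^3/s)
rng = random.Random(1)
true_loc, true_scale = 120.0, 30.0
sample = [true_loc - true_scale * math.log(-math.log(rng.random())) for _ in range(200)]
loc, scale = fit_gumbel_moments(sample)
q100 = gumbel_quantile(0.99, loc, scale)
```

Comparing such fitted quantiles against model-derived ones is one way to score a calibration strategy against the observed flood frequency distribution.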

  15. Probability distribution of wave packet delay time for strong overlapping of resonance levels

    International Nuclear Information System (INIS)

    Lyuboshits, V.L.

    1983-01-01

    The time behaviour of nuclear reactions in the case of high level densities is investigated on the basis of the theory of overlapping resonances. In the framework of a model of n equivalent channels, an analytical expression is obtained for the probability distribution function of the wave-packet delay time in compound nucleus production. It is shown that, for strong overlapping of the resonance levels, the relative fluctuation of the delay time is small at the stage of compound nucleus production. A possible increase in the duration of nuclear reactions with rising excitation energy is discussed.

  16. Probability distribution of machining center failures

    International Nuclear Information System (INIS)

    Jia Yazhou; Wang Molin; Jia Zhixin

    1995-01-01

    Through field-tracing research on 24 Chinese cutter-changeable CNC machine tools (machining centers) over a period of one year, a database of machining-center operation and maintenance was built. The failure data were fitted to the Weibull distribution and the exponential distribution, the goodness of fit was tested, and the failure distribution pattern of machining centers was found. Finally, reliability characterizations for machining centers are proposed.
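The Weibull fit described above can be sketched with a maximum-likelihood estimator; the exponential distribution is the special case of shape k = 1. The failure data below are synthetic, and the bisection-based solver is our implementation, not the study's method:

```python
import math, random

def weibull_mle(data, iters=60):
    """Weibull shape k and scale lam by maximum likelihood; k is found by
    bisection on the standard MLE score equation, which is monotone in k."""
    n = len(data)
    mean_log = sum(math.log(x) for x in data) / n

    def score(k):
        sk = sum(x ** k for x in data)
        skl = sum((x ** k) * math.log(x) for x in data)
        return skl / sk - 1.0 / k - mean_log

    lo, hi = 1e-3, 20.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in data) / n) ** (1.0 / k)
    return k, lam

# Synthetic time-between-failure data (hours, hypothetical), drawn from a Weibull
rng = random.Random(7)
true_k, true_lam = 1.3, 500.0
sample = [true_lam * (-math.log(1.0 - rng.random())) ** (1.0 / true_k) for _ in range(300)]
k, lam = weibull_mle(sample)
```

A shape estimate near 1 would indicate the exponential (constant hazard) model suffices; k > 1 indicates wear-out, k < 1 early failures.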

  17. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...

  18. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and the current site-selection methods have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution over the interval number is uniform, and the neglect of decision makers' (DMs') common opinion in evaluating the criteria information. Second, the difference in DMs' utility functions has failed to receive attention. An innovative method is proposed in this article to overcome these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of this method.

  19. β-distribution for Reynolds stress and turbulent heat flux in relaxation turbulent boundary layer of compression ramp

    Science.gov (United States)

    Hu, YanChao; Bi, WeiTao; Li, ShiYao; She, ZhenSu

    2017-12-01

A challenge in the study of turbulent boundary layers (TBLs) is to understand the non-equilibrium relaxation process after separation and reattachment due to shock-wave/boundary-layer interaction. The classical boundary layer theory cannot deal with the strong adverse pressure gradient, and hence, the computational modeling of this process remains inaccurate. Here, we report the direct numerical simulation results of the relaxation TBL behind a compression ramp, which reveal the presence of intense large-scale eddies, with significantly enhanced Reynolds stress and turbulent heat flux. A crucial finding is that the wall-normal profiles of the excess Reynolds stress and turbulent heat flux obey a β-distribution, which is a product of two power laws with respect to the wall-normal distances from the wall and from the boundary layer edge. In addition, the streamwise decays of the excess Reynolds stress and turbulent heat flux also exhibit power laws with respect to the streamwise distance from the corner of the compression ramp. These results suggest that the relaxation TBL obeys the dilation symmetry, which is a specific form of self-organization in this complex non-equilibrium flow. The β-distribution yields important hints for the development of a turbulence model.
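
A profile of the reported form f(y) = A·y^m·(1 − y)^n is linear in log space, so its exponents can be recovered by ordinary least squares. The following is a minimal sketch on synthetic data, not the authors' code; the exponents used are illustrative.

```python
import numpy as np

def fit_beta_profile(y, f):
    """Fit f(y) = A * y**m * (1 - y)**n by least squares in log space.

    y is a wall-normal coordinate scaled into (0, 1); the endpoints are
    excluded because the logs diverge there."""
    X = np.column_stack([np.ones_like(y), np.log(y), np.log(1.0 - y)])
    coef, *_ = np.linalg.lstsq(X, np.log(f), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]   # A, m, n

y = np.linspace(0.05, 0.95, 50)        # scaled wall-normal coordinate
f = 2.0 * y**0.5 * (1.0 - y)**1.5      # synthetic "excess stress" profile
A, m, n = fit_beta_profile(y, f)
print(A, m, n)  # recovers A=2.0, m=0.5, n=1.5 on noise-free data
```

With measurement noise the same least-squares step still applies; only the recovered exponents acquire uncertainty.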

  20. Feasibility study of Self Powered Neutron Detectors in Fast Reactors for detecting local change in neutron flux distribution

    International Nuclear Information System (INIS)

    Jammes, Christian; Filliatre, Philippe; Verma, Vasudha; Hellesen, Carl; Jacobsson Svard, Staffan

    2015-01-01

The neutron flux monitoring system forms an integral part of the design of a Generation IV sodium cooled fast reactor system. Diverse options for detector system installation have to be investigated with respect to practicality and feasibility according to the detection parameters. In this paper, we demonstrate the feasibility of using self-powered neutron detectors as in-core detectors in fast reactors for detecting local changes in the neutron flux distribution. We show that the gamma contribution from fission product decay in the fuel and activation of structural materials is very small compared to the fission gammas. Thus, it is possible for the in-core SPND signal to follow changes in the local neutron flux, as they are proportional to each other. This implies that the signal from an in-core SPND can provide dynamic information on the neutron flux perturbations occurring inside the reactor core. (authors)

  1. Feasibility study of Self Powered Neutron Detectors in Fast Reactors for detecting local change in neutron flux distribution

    Energy Technology Data Exchange (ETDEWEB)

    Jammes, Christian; Filliatre, Philippe [CEA, DEN, DER, Instrumentation Sensors and Dosimetry Laboratory, Cadarache, F-13108 St Paul-Lez-Durance, (France); Verma, Vasudha; Hellesen, Carl; Jacobsson Svard, Staffan [Division of Applied Nuclear Physics, Uppsala University, SE-75120 Uppsala, (Sweden)

    2015-07-01

The neutron flux monitoring system forms an integral part of the design of a Generation IV sodium cooled fast reactor system. Diverse options for detector system installation have to be investigated with respect to practicality and feasibility according to the detection parameters. In this paper, we demonstrate the feasibility of using self-powered neutron detectors as in-core detectors in fast reactors for detecting local changes in the neutron flux distribution. We show that the gamma contribution from fission product decay in the fuel and activation of structural materials is very small compared to the fission gammas. Thus, it is possible for the in-core SPND signal to follow changes in the local neutron flux, as they are proportional to each other. This implies that the signal from an in-core SPND can provide dynamic information on the neutron flux perturbations occurring inside the reactor core. (authors)

  2. High resolution isotope data and ensemble modelling reveal ecohydrological controls on catchment storage-discharge relationships and flux travel time distributions

    Science.gov (United States)

    Soulsby, C.; Kuppel, S.; Smith, A.; Tetzlaff, D.

    2017-12-01

The dynamics of water storage in a catchment provides a fundamental insight into the interlinkages between input and output fluxes, and how these are affected by environmental change. Such dynamics also mediate, and help us understand, the fundamental difference between the rapid celerity of the rainfall-runoff (minutes to hours) response of catchments and the much slower velocity of water particles (months to decades) as they are transported through catchment systems. In this contribution we report an intensive, long-term (>10 year), multi-scale isotope study in the Scottish Highlands that has sought to better understand these issues. We have integrated empirical data collection with diverse modelling approaches to quantify the dynamics and residence times of storage in different compartments of the hydrological system (vegetation canopies, soils, ground waters etc.) and their relationship to the magnitude and travel time distributions of output fluxes (stream flow, transpiration and evaporation). Use of conceptual, physically-based and probabilistic modelling approaches gives broadly consistent perspectives on the storage-discharge relationships and the preferential selection of younger waters in runoff, evaporation and transpiration, while older waters predominate in groundwater. The work also highlighted the important role vegetation plays in regulating fluxes in evaporation and transpiration and how this contributes to the differential ageing of water in mobile and bulk waters in the soil compartment. A separate case study shows how land use change can affect storage distributions in a catchment and radically change travel time distributions in output fluxes.

  3. The quantum probability calculus

    International Nuclear Information System (INIS)

    Jauch, J.M.

    1976-01-01

    The Wigner anomaly (1932) for the joint distribution of noncompatible observables is an indication that the classical probability calculus is not applicable for quantum probabilities. It should, therefore, be replaced by another, more general calculus, which is specifically adapted to quantal systems. In this article this calculus is exhibited and its mathematical axioms and the definitions of the basic concepts such as probability field, random variable, and expectation values are given. (B.R.H)

  4. Joint probabilities and quantum cognition

    International Nuclear Information System (INIS)

    Acacio de Barros, J.

    2012-01-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
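
The non-existence result above has a simple finite witness: any joint pmf for three ±1-valued variables is a mixture of the 8 deterministic assignments, and over those assignments the sum xy + yz + zx is bounded below by −1, so perfectly anticorrelated pairwise moments E[XY] = E[YZ] = E[ZX] = −1 are unrealizable. The brute-force check below is an illustration of that obstruction, not the paper's oscillator model.

```python
from itertools import product

# Enumerate the 8 deterministic assignments of (X, Y, Z) in {-1, +1}^3.
# Any joint distribution is a convex mixture of these, so the expectation
# E[XY + YZ + ZX] is bounded below by the minimum over the assignments.
atoms = list(product([-1, 1], repeat=3))
min_sum = min(x * y + y * z + z * x for x, y, z in atoms)
print(min_sum)  # -1: hence E[XY] + E[YZ] + E[ZX] >= -1 for any joint pmf
```

Since −1 > −3, the three pairwise correlations cannot all equal −1 under any joint distribution, even though each pair separately admits one.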

  5. Joint probabilities and quantum cognition

    Energy Technology Data Exchange (ETDEWEB)

    Acacio de Barros, J. [Liberal Studies, 1600 Holloway Ave., San Francisco State University, San Francisco, CA 94132 (United States)

    2012-12-18

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  6. Spatial distribution and occurrence probability of regional new particle formation events in eastern China

    Directory of Open Access Journals (Sweden)

    X. Shen

    2018-01-01

    Full Text Available In this work, the spatial extent of new particle formation (NPF events and the relative probability of observing particles originating from different spatial origins around three rural sites in eastern China were investigated using the NanoMap method, using particle number size distribution (PNSD data and air mass back trajectories. The length of the datasets used were 7, 1.5, and 3 years at rural sites Shangdianzi (SDZ in the North China Plain (NCP, Mt. Tai (TS in central eastern China, and Lin'an (LAN in the Yangtze River Delta region in eastern China, respectively. Regional NPF events were observed to occur with the horizontal extent larger than 500 km at SDZ and TS, favoured by the fast transport of northwesterly air masses. At LAN, however, the spatial footprint of NPF events was mostly observed around the site within 100–200 km. Difference in the horizontal spatial distribution of new particle source areas at different sites was connected to typical meteorological conditions at the sites. Consecutive large-scale regional NPF events were observed at SDZ and TS simultaneously and were associated with a high surface pressure system dominating over this area. Simultaneous NPF events at SDZ and LAN were seldom observed. At SDZ the polluted air masses arriving over the NCP were associated with higher particle growth rate (GR and new particle formation rate (J than air masses from Inner Mongolia (IM. At TS the same phenomenon was observed for J, but GR was somewhat lower in air masses arriving over the NCP compared to those arriving from IM. The capability of NanoMap to capture the NPF occurrence probability depends on the length of the dataset of PNSD measurement but also on topography around the measurement site and typical air mass advection speed during NPF events. Thus the long-term measurements of PNSD in the planetary boundary layer are necessary in the further study of spatial extent and the probability of NPF events. The spatial

  7. Probability elements of the mathematical theory

    CERN Document Server

    Heathcote, C R

    2000-01-01

    Designed for students studying mathematical statistics and probability after completing a course in calculus and real variables, this text deals with basic notions of probability spaces, random variables, distribution functions and generating functions, as well as joint distributions and the convergence properties of sequences of random variables. Includes worked examples and over 250 exercises with solutions.

  8. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators...
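
A minimal illustration of why sharp asymptotics and tailored Monte Carlo estimators matter here: for a sum of i.i.d. heavy-tailed (lognormal) risks, the crude Monte Carlo estimate of a tail probability can be compared with the subexponential "single big jump" approximation n·P(X₁ > t). All numbers below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Crude Monte Carlo estimate of P(S > t) for S = X_1 + ... + X_n,
# X_i i.i.d. lognormal(0, 1), next to the first-order subexponential
# asymptotic n * P(X_1 > t).  Both are rough at moderate t, which is
# what sharper expansions and variance-reduced estimators address.
rng = np.random.default_rng(0)
n, t, N = 10, 50.0, 200_000
S = rng.lognormal(mean=0.0, sigma=1.0, size=(N, n)).sum(axis=1)
crude = (S > t).mean()             # crude Monte Carlo estimate
asym = n * norm.sf(np.log(t))      # single-big-jump approximation
print(crude, asym)
```

The crude estimator's relative error blows up as t grows (rare hits), which is the usual motivation for importance sampling or conditional Monte Carlo in this setting.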

  9. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    International Nuclear Information System (INIS)

    Isaacson, J.A.; Canizares, C.R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux. 17 references

  10. The Chandra Source Catalog 2.0: Estimating Source Fluxes

    Science.gov (United States)

    Primini, Francis Anthony; Allen, Christopher E.; Miller, Joseph; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

The Second Chandra Source Catalog (CSC2.0) will provide information on approximately 316,000 point or compact extended x-ray sources, derived from over 10,000 ACIS and HRC-I imaging observations available in the public archive at the end of 2014. As in the previous catalog release (CSC1.1), fluxes for these sources will be determined separately from source detection, using a Bayesian formalism that accounts for background, spatial resolution effects, and contamination from nearby sources. However, the CSC2.0 procedure differs from that used in CSC1.1 in three important aspects. First, for sources in crowded regions in which photometric apertures overlap, fluxes are determined jointly, using an extension of the CSC1.1 algorithm, as discussed in Primini & Kashyap (2014ApJ...796...24P). Second, an MCMC procedure is used to estimate marginalized posterior probability distributions for source fluxes. Finally, for sources observed in multiple observations, a Bayesian Blocks algorithm (Scargle et al. 2013ApJ...764..167S) is used to group observations into blocks of constant source flux. In this poster we present details of the CSC2.0 photometry algorithms and illustrate their performance in actual CSC2.0 datasets. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  11. Most probable number methodology for quantifying dilute concentrations and fluxes of Escherichia coli O157:H7 in surface waters.

    Science.gov (United States)

    Jenkins, M B; Endale, D M; Fisher, D S; Gay, P A

    2009-02-01

To better understand the transport and enumeration of dilute densities of Escherichia coli O157:H7 in agricultural watersheds, we developed a culture-based, five-tube, multiple-dilution most probable number (MPN) method. The MPN method combined a filtration technique for large volumes of surface water with standard selective media, biochemical and immunological tests, and a TaqMan confirmation step. This method determined E. coli O157:H7 concentrations as low as 0.1 MPN per litre, with a 95% confidence interval of 0.01-0.7 MPN per litre. Escherichia coli O157:H7 densities ranged from not detectable to 9 MPN per litre for pond inflow, from not detectable to 0.9 MPN per litre for pond outflow and from not detectable to 8.3 MPN per litre for within pond. The MPN methodology was extended to mass flux determinations. Fluxes of E. coli O157:H7 ranged from 10^4 MPN per hour. This culture-based method can detect small numbers of viable/culturable E. coli O157:H7 in surface waters of watersheds containing animal agriculture and wildlife. This MPN method will improve our understanding of the transport and fate of E. coli O157:H7 in agricultural watersheds, and can be the basis of collections of environmental E. coli O157:H7.
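
The MPN point estimate for a multiple-dilution tube series is the root of a standard maximum-likelihood score equation, under the model that a tube inoculated with volume v is positive with probability 1 − exp(−λv). The sketch below is a generic MPN solver with hypothetical volumes and tube counts, not the paper's exact protocol or tables.

```python
import numpy as np
from scipy.optimize import brentq

def mpn(volumes, positives, tubes):
    """Maximum-likelihood most probable number (organisms per unit volume).

    Solves the MPN score equation d(log L)/d(lambda) = 0 for a
    multiple-dilution series, where each tube inoculated with volume v
    is positive with probability 1 - exp(-lambda * v)."""
    v = np.asarray(volumes, float)    # inoculum volume per tube at each dilution
    x = np.asarray(positives, float)  # positive tubes at each dilution
    n = np.asarray(tubes, float)      # tubes inoculated at each dilution

    def score(lam):
        return np.sum(x * v / (1.0 - np.exp(-lam * v))) - np.sum(n * v)

    return brentq(score, 1e-9, 1e6)

# Hypothetical 5-tube series with volumes in litres:
est = mpn(volumes=[1.0, 0.1, 0.01], positives=[5, 3, 1], tubes=[5, 5, 5])
print(round(est, 1), "MPN per litre")
```

Confidence intervals (such as the 95% interval quoted in the abstract) come from the curvature of the same likelihood or from standard MPN tables, which this sketch does not reproduce.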

  12. Distribution of flux-pinning energies in YBa2Cu3O7-δ and Bi2Sr2CaCu2O8+δ from flux noise

    International Nuclear Information System (INIS)

    Ferrari, M.J.; Johnson, M.; Wellstood, F.C.; Clarke, J.; Mitzi, D.; Rosenthal, P.A.; Eom, C.B.; Geballe, T.H.; Kapitulnik, A.; Beasley, M.R.

    1990-01-01

The spectral density of the magnetic flux noise measured in high-temperature superconductors in low magnetic fields scales approximately as the inverse of the frequency and increases with temperature. We use the temperature and frequency dependence of the noise to determine the pinning energies of individual flux vortices in thermal equilibrium. The distribution of pinning energies peaks below 0.1 eV in YBa2Cu3O7-δ and near 0.2 eV in Bi2Sr2CaCu2O8+δ. The noise power is proportional to the ambient magnetic field, indicating that the vortex motion is uncorrelated.

  13. A statistical model for deriving probability distributions of contamination for accidental releases

    International Nuclear Information System (INIS)

    ApSimon, H.M.; Davison, A.C.

    1986-01-01

    Results generated from a detailed long-range transport model, MESOS, simulating dispersal of a large number of hypothetical releases of radionuclides in a variety of meteorological situations over Western Europe have been used to derive a simpler statistical model, MESOSTAT. This model may be used to generate probability distributions of different levels of contamination at a receptor point 100-1000 km or so from the source (for example, across a frontier in another country) without considering individual release and dispersal scenarios. The model is embodied in a series of equations involving parameters which are determined from such factors as distance between source and receptor, nuclide decay and deposition characteristics, release duration, and geostrophic windrose at the source. Suitable geostrophic windrose data have been derived for source locations covering Western Europe. Special attention has been paid to the relatively improbable extreme values of contamination at the top end of the distribution. The MESOSTAT model and its development are described, with illustrations of its use and comparison with the original more detailed modelling techniques. (author)

  14. Probability distributions of placental morphological measurements and origins of variability of placental shapes.

    Science.gov (United States)

    Yampolsky, M; Salafia, C M; Shlakhter, O

    2013-06-01

While the mean shape of the human placenta is round with a centrally inserted umbilical cord, significant deviations from this ideal are fairly common, and may be clinically meaningful. Traditionally, they are explained by trophotropism. We have proposed a hypothesis explaining typical variations in placental shape by randomly determined fluctuations in the growth process of the vascular tree. It has been recently reported that umbilical cord displacement in a birth cohort has a log-normal probability distribution, which indicates that the displacement between an initial point of origin and the centroid of the mature shape is a result of accumulation of random fluctuations of the dynamic growth of the placenta. To confirm this, we investigate statistical distributions of other features of placental morphology. In a cohort of 1023 births at term, digital photographs of placentas were recorded at delivery. Excluding cases with velamentous cord insertion or missing clinical data left 1001 (97.8%) in which placental surface morphology features were measured. Best-fit statistical distributions for them were obtained using EasyFit. The best-fit distributions of umbilical cord displacement, placental disk diameter, area, perimeter, and maximal radius calculated from the cord insertion point are of heavy-tailed type, similar in shape to log-normal distributions. This is consistent with a stochastic origin of deviations of placental shape from normal. Deviations of placental shape descriptors from average have heavy-tailed distributions similar in shape to log-normal. This evidence points away from trophotropism, and towards a spontaneous stochastic evolution of the variants of placental surface shape features. Copyright © 2013 Elsevier Ltd. All rights reserved.
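
The kind of model comparison described, deciding whether a heavy-tailed (log-normal-like) law fits a morphological measurement better than a normal one, can be sketched by maximum-likelihood fits and AIC. The data below are synthetic, not the placental cohort, and the parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=1.0, sigma=0.6, size=1000)  # synthetic measurements

# Maximum-likelihood fits of two candidate laws, compared by AIC
# (2k - 2 log L; both fits have k = 2 free parameters here).
shape, loc, scale = stats.lognorm.fit(data, floc=0)   # location fixed at 0
ll_logn = stats.lognorm.logpdf(data, shape, loc, scale).sum()
mu, sd = stats.norm.fit(data)
ll_norm = stats.norm.logpdf(data, mu, sd).sum()
aic_logn, aic_norm = 2 * 2 - 2 * ll_logn, 2 * 2 - 2 * ll_norm
print(aic_logn < aic_norm)  # True: the heavy-tailed fit wins on these data
```

For `scipy.stats.lognorm`, the fitted `shape` parameter is the σ of the underlying normal, so its recovery near 0.6 is a quick sanity check of the fit.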

  15. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    Science.gov (United States)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. We can then apply the appropriate ground motion prediction equations (GMPE) for PSHA. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid fixed at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend of using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
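
For the line case, the site-to-rupture distance distribution can be sanity-checked by Monte Carlo: place a rupture segment of fixed length uniformly along a straight fault trace and record the minimum distance to the site. The geometry below (site offset, fault length, rupture length) is entirely hypothetical, and this sketch does not reproduce the paper's analytical solution.

```python
import numpy as np

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

rng = np.random.default_rng(2)
site = np.array([0.0, 20.0])        # site 20 km off the fault trace
rupture_len = 30.0                  # km, fixed rupture length
# Rupture start placed uniformly on a 100 km fault trace along y = 0:
starts = rng.uniform(-50.0, 50.0 - rupture_len, size=50_000)
d = np.array([dist_to_segment(site, [s, 0.0], [s + rupture_len, 0.0])
              for s in starts])
print(d.min(), d.max())  # minimum is 20 km whenever the rupture spans x = 0
```

The empirical CDF of `d` is the (unconditional) FFDD for this geometry; conditioning on magnitude would enter through a scaling law for `rupture_len`.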

16. The Effect of Breaking Waves on CO₂ Air-Sea Fluxes in the Coastal Zone

    Science.gov (United States)

    Gutiérrez-Loza, Lucía; Ocampo-Torres, Francisco J.; García-Nava, Héctor

    2018-03-01

The influence of wave-associated parameters controlling turbulent CO₂ fluxes through the air-sea interface is investigated in a coastal region. A full year of high-quality data of direct estimates of air-sea CO₂ fluxes based on eddy-covariance measurements is presented. The study area, located in Todos Santos Bay, Baja California, Mexico, is a net sink of CO₂ with a mean flux of -1.3 μmol m^{-2} s^{-1} (-41.6 mol m^{-2} yr^{-1}). The results of a quantile-regression analysis computed between the CO₂ flux and (1) wind speed, (2) significant wave height, (3) wave steepness, and (4) water temperature suggest that the significant wave height is the parameter most correlated with the magnitude of the flux, but the behaviour of the relation varies along the probability distribution function, with the slopes of the regression lines presenting both positive and negative values. These results imply that the presence of surface waves in coastal areas is the key factor that promotes the increase of the flux from and into the ocean. Further analysis suggests that the local characteristics of the aqueous and atmospheric layers might determine the direction of the flux.
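
Quantile regression of the kind used here fits a separate line at each quantile by minimising the pinball (check) loss, which is how slopes can differ across the distribution of the flux. The sketch below uses synthetic data with illustrative names and numbers, not the Todos Santos Bay measurements.

```python
import numpy as np
from scipy.optimize import minimize

def quantile_fit(x, y, q):
    """Linear quantile regression y ~ a + b*x by minimising the pinball loss."""
    def loss(beta):
        r = y - (beta[0] + beta[1] * x)
        return np.mean(np.maximum(q * r, (q - 1.0) * r))
    return minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead").x

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 3.0, 2000)                    # e.g. significant wave height, m
y = -0.5 - 0.4 * x + rng.normal(0.0, 0.3, 2000)    # synthetic flux observations
(a50, b50), (a90, b90) = quantile_fit(x, y, 0.5), quantile_fit(x, y, 0.9)
print(b50, b90)  # with symmetric noise, both slopes are near -0.4
```

With asymmetric or heteroscedastic noise, the fitted slopes diverge across quantiles, which is the pattern of sign changes the abstract describes.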

  17. An analog computer method for solving flux distribution problems in multi region nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Radanovic, L; Bingulac, S; Lazarevic, B; Matausek, M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)

    1963-04-15

    The paper describes a method developed for determining criticality conditions and plotting flux distribution curves in multi region nuclear reactors on a standard analog computer. The method, which is based on the one-dimensional two group treatment, avoids iterative procedures normally used for boundary value problems and is practically insensitive to errors in initial conditions. The amount of analog equipment required is reduced to a minimum and is independent of the number of core regions and reflectors. (author)

  18. Introduction to probability with R

    CERN Document Server

    Baclawski, Kenneth

    2008-01-01

    FOREWORD PREFACE Sets, Events, and Probability The Algebra of Sets The Bernoulli Sample Space The Algebra of Multisets The Concept of Probability Properties of Probability Measures Independent Events The Bernoulli Process The R Language Finite Processes The Basic Models Counting Rules Computing Factorials The Second Rule of Counting Computing Probabilities Discrete Random Variables The Bernoulli Process: Tossing a Coin The Bernoulli Process: Random Walk Independence and Joint Distributions Expectations The Inclusion-Exclusion Principle General Random Variable

  19. In favor of general probability distributions: lateral prefrontal and insular cortices respond to stimulus inherent, but irrelevant differences.

    Science.gov (United States)

    Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A

    2016-04-01

A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous words compared to unambiguous words even when prior and contextual information biases one interpretation only. These results suggest a probability distribution, whereby all outcomes and their associated probabilities of occurrence, even very low ones, are represented and maintained.

  20. Collision probability method for discrete presentation of space in cylindrical cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1969-08-01

A suitable numerical method for integration of the one-group integral transport equation is obtained by series expansion of the flux and neutron source in powers of the radius squared, when calculating the parameters of a cylindrically symmetric reactor cell. Separation of variables in the (x,y) plane enables analytical integration in one direction and an efficient Gauss quadrature formula in the other. A white boundary condition is used for determining the neutron balance. A suitable choice of the spatial point distribution in the fuel and moderator condenses the procedure for determining the transport matrix and accelerates the convergence when calculating the absorption in the reactor cell. In comparison to other collision probability methods, the proposed procedure is a simpler mathematical model that demands less computer capacity and shorter computing time.
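
The reason a Gauss quadrature needs only a few points per direction, as exploited above, is its polynomial exactness: an n-point Gauss-Legendre rule integrates polynomials of degree 2n − 1 exactly on [-1, 1]. A minimal illustration:

```python
import numpy as np

# A 3-point Gauss-Legendre rule is exact for polynomials up to degree 5,
# so it reproduces the integral of x^4 over [-1, 1] exactly.
nodes, weights = np.polynomial.legendre.leggauss(3)
integral = np.sum(weights * nodes**4)
print(integral)  # 0.4 = 2/5, the exact value of the integral
```

In a collision probability code the integrand is the transport kernel rather than a polynomial, but the same rule keeps the number of quadrature directions small.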

  1. Estimating species occurrence, abundance, and detection probability using zero-inflated distributions.

    Science.gov (United States)

    Wenger, Seth J; Freeman, Mary C

    2008-10-01

Researchers have developed methods to account for imperfect detection of species with either occupancy (presence-absence) or count data using replicated sampling. We show how these approaches can be combined to simultaneously estimate occurrence, abundance, and detection probability by specifying a zero-inflated distribution for abundance. This approach may be particularly appropriate when patterns of occurrence and abundance arise from distinct processes operating at differing spatial or temporal scales. We apply the model to two data sets: (1) previously published data for a species of duck, Anas platyrhynchos, and (2) data for a stream fish species, Etheostoma scotti. We show that in these cases, an incomplete-detection zero-inflated modeling approach yields a superior fit to the data compared with other models. We propose that zero-inflated abundance models accounting for incomplete detection be considered when replicate count data are available.
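
The zero-inflated backbone of such models can be sketched with a plain zero-inflated Poisson (ZIP) likelihood: zeros arise either from unoccupied sites (with probability π) or from occupied sites that happen to yield a zero count. This sketch omits the detection-probability layer of the paper's full model, and all parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_nll(params, y):
    """Negative log-likelihood of a zero-inflated Poisson:
    P(0) = pi + (1-pi)*exp(-lam);  P(k>0) = (1-pi)*Poisson(k; lam)."""
    pi, lam = params
    if not (0.0 < pi < 1.0 and lam > 0.0):
        return np.inf
    ll0 = np.log(pi + (1.0 - pi) * np.exp(-lam))
    llk = np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll0, llk))

rng = np.random.default_rng(4)
n = 5000
occupied = rng.random(n) < 0.6                  # occupancy process (pi = 0.4)
y = np.where(occupied, rng.poisson(2.5, n), 0)  # abundance process
pi_hat, lam_hat = minimize(zip_nll, x0=[0.5, 1.0], args=(y,),
                           method="Nelder-Mead").x
print(pi_hat, lam_hat)  # near the true 0.4 and 2.5
```

Adding incomplete detection amounts to replacing the Poisson observation term with a binomial thinning across replicate visits, fit over the same zero-inflated mixture.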

  2. Transient critical heat flux under flow coast-down in vertical annulus with non-uniform heat flux distribution

    International Nuclear Information System (INIS)

    Moon, S.K.; Chun, S.Y.; Choi, K.Y.; Yang, S.K.

    2001-01-01

An experimental study on transient critical heat flux (CHF) under flow coast-down has been performed for water flow in a non-uniformly heated vertical annulus under low flow and a wide range of pressure conditions. The objectives of this study are to systematically investigate the effect of the flow transient on the CHF and to compare the transient CHF with steady state CHF. The transient CHF experiments have been performed for three kinds of flow transient modes based on the coast-down data of the Kori 3/4 nuclear power plant reactor coolant pump. Most of the CHFs occurred in the annular-mist flow regime. This suggests that the likely CHF mechanism is liquid film dryout in the annular-mist flow regime. For the flow transient mode with the smallest flow reduction rate, the time-to-CHF is the largest. At the same inlet subcooling, system pressure and heat flux, the effect of the initial mass flux on the critical mass flux can be negligible. However, the effect of the initial mass flux on the time-to-CHF becomes large as the heat flux decreases. Usually, the critical mass flux is large for slow flow reduction. There is a pressure effect on the ratio of the transient CHF data to steady state CHF data. Some conventional correlations show relatively better CHF prediction results for high system pressure, high quality and slow transient modes than for low system pressure, low quality and fast transient modes. (author)

  3. Probability distribution of distance in a uniform ellipsoid: Theory and applications to physics

    International Nuclear Information System (INIS)

    Parry, Michelle; Fischbach, Ephraim

    2000-01-01

A number of authors have previously found the probability P_n(r) that two points uniformly distributed in an n-dimensional sphere are separated by a distance r. This result greatly facilitates the calculation of self-energies of spherically symmetric matter distributions interacting by means of an arbitrary radially symmetric two-body potential. We present here the analogous results for P_2(r;ε) and P_3(r;ε) which respectively describe an ellipse and an ellipsoid whose major and minor axes are 2a and 2b. It is shown that for ε=(1-b^2/a^2)^{1/2}≤1, P_2(r;ε) and P_3(r;ε) can be obtained as an expansion in powers of ε, and our results are valid through order ε^4. As an application of these results we calculate the Coulomb energy of an ellipsoidal nucleus, and compare our result to an earlier result quoted in the literature. (c) 2000 American Institute of Physics
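
In the spherical limit ε = 0, the three-dimensional density is the classical closed form p(r) = 3r² − (9/4)r³ + (3/16)r⁵ for the unit sphere (0 ≤ r ≤ 2), with mean distance 36/35. A quick Monte Carlo check of that baseline:

```python
import numpy as np

rng = np.random.default_rng(5)

def uniform_in_unit_sphere(n):
    """Rejection-sample n points uniformly inside the unit sphere."""
    pts = []
    while len(pts) < n:
        cand = rng.uniform(-1.0, 1.0, size=(2 * n, 3))
        pts.extend(cand[np.sum(cand**2, axis=1) <= 1.0])
    return np.array(pts[:n])

# Distance between two independent uniform points in the unit sphere;
# the known density is p(r) = 3r^2 - (9/4)r^3 + (3/16)r^5 on [0, 2].
n = 200_000
r = np.linalg.norm(uniform_in_unit_sphere(n) - uniform_in_unit_sphere(n), axis=1)
print(r.mean())  # close to the exact mean 36/35 ≈ 1.0286
```

The ellipsoidal expansions in ε of the abstract perturb this baseline; the same Monte Carlo scheme (sampling in the ellipsoid instead) would verify them as well.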

  4. On probability-possibility transformations

    Science.gov (United States)

    Klir, George J.; Parviz, Behzad

    1992-01-01

    Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
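
One commonly compared transformation in this literature is the ratio scale, π_i = p_i / max(p), which maps the most probable outcome to possibility 1 and preserves the ordering. The snippet below is a generic sketch of that single transformation, not the paper's simulation study.

```python
import numpy as np

def prob_to_poss(p):
    """Ratio-scale probability-to-possibility transformation:
    pi_i = p_i / max(p).  Order-preserving, with max possibility 1."""
    p = np.asarray(p, float)
    return p / p.max()

poss = prob_to_poss([0.5, 0.3, 0.2])
print(poss)  # [1.  0.6 0.4]
```

Comparing how well such transformations preserve second-order properties (noninteraction, projections of joint distributions) is exactly what the simulation experiments in the abstract evaluate.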

  5. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  6. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Science.gov (United States)

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.

  7. Tools for Bramwell-Holdsworth-Pinton Probability Distribution

    Directory of Open Access Journals (Sweden)

    Mirela Danubianu

    2009-01-01

    Full Text Available This paper is a synthesis of a range of papers presented at various conferences related to the Bramwell-Holdsworth-Pinton distribution. S. T. Bramwell, P. C. W. Holdsworth, and J. F. Pinton introduced a new non-parametric distribution (called BHP) after studying some 2D magnetization problems. The probability density function of the distribution can be approximated as a modified GFT (Gumbel-Fisher-Tippett) distribution.

  8. Theoretical analysis of nuclear reactors (Phase I), I-V, Part V, Determining the fine flux distribution

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1962-07-01

    The monoenergetic neutron transport equation was solved by the Carlson numerical method in cylindrical geometry. An S_n code was developed for the digital computer ZUSE Z23. The neutron flux distribution was determined for the RA reactor cell by applying the S_4 approximation. The reactor cell was treated as a D2O-U-D2O system. The time per iteration was 185 s.

  9. Flux and energy deposition distribution studies inside the irradiation room of the portuguese 60Co irradiation facility

    International Nuclear Information System (INIS)

    Portugal, Luis; Oliveira, Carlos

    2008-01-01

    Full text: In December 2003 the irradiator of the Portuguese 60Co irradiation facility, UTR, was replenished. Eighteen new sources were loaded and the older ones (156) were rearranged. The result was an irradiator with about 10.2 PBq of total activity. The active area of the irradiator has also increased: it now uses twenty-five of the thirty tubes of the source rack, nine more than in the previous geometry. This facility was designed mainly for the sterilisation of medical devices, but it is also used for the irradiation of other products such as cork stoppers, plastics and a limited number of food and feed items. The purpose of this work is to perform dosimetric studies inside the irradiation room of a 60Co irradiation facility, in particular of the flux and energy deposition distributions. The MCNPX code was used for the simulation of the facility. The track-average mesh tally capabilities of MCNPX were used to plot the photon flux and energy deposition distributions; this tool provides a fast way of mapping flux and energy deposition. The absorbed dose distribution near the walls of the irradiation room was also calculated. Instead of using mesh tallies as before, the average absorbed dose inside boxes lining the walls was determined and a plot of its distribution was then made. The absorbed dose rates obtained ranged from 5 to 500 Gy/h, depending on the material being irradiated in process and the location on the wall. These positions can be useful for fixed irradiation purposes. Both dosimetric studies were done considering two different materials being irradiated in the process: cork stoppers and water, materials with quite different densities (0.102 and 1 g/cm³, respectively). These studies showed some important characteristics of the radiation fields inside the irradiation room, namely their spatial heterogeneity. Tunnelling and shadow effects were enhanced as the density of the product boxes increased. Besides a deeper dosimetric understanding of the

  10. Constituent quarks as clusters in quark-gluon-parton model. [Total cross sections, probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1976-12-01

    We present a quark-gluon-parton model in which quark-partons and gluons make clusters corresponding to two or three constituent quarks (or anti-quarks) in the meson or in the baryon, respectively. We explicitly construct the constituent quark state (cluster) by employing the Kuti-Weisskopf theory and by requiring scaling. The quark additivity of the hadronic total cross sections and the quark counting rules on the threshold powers of various distributions are satisfied. For small x (Feynman fraction), it is shown that the constituent quarks and quark-partons have quite different probability distributions. We apply our model to hadron-hadron inclusive reactions, and clarify that the fragmentation and diffractive processes relate to the constituent quark distributions, while processes in or near the central region are controlled by the quark-partons. Our model gives a reasonable interpretation of the experimental data and much improves the usual ''constituent interchange model'' result near and in the central region (x ≈ x_T ≈ 0).

  11. Concise method for evaluating the probability distribution of the marginal cost of power generation

    International Nuclear Information System (INIS)

    Zhang, S.H.; Li, Y.Z.

    2000-01-01

    In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)

  12. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    Science.gov (United States)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly for minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.

  13. Domain walls, near-BPS bubbles, and probabilities in the landscape

    International Nuclear Information System (INIS)

    Ceresole, Anna; Dall'Agata, Gianguido; Giryavets, Alexander; Kallosh, Renata; Linde, Andrei

    2006-01-01

    We develop a theory of static Bogomol'nyi-Prasad-Sommerfield (BPS) domain walls in the stringy landscape and present a large family of BPS walls interpolating between different supersymmetric vacua. Examples include Kachru-Kallosh-Linde-Trivedi (KKLT) models, STU models, type IIB multiple flux vacua, and models with several Minkowski and anti-de Sitter vacua. After the uplifting, some of the vacua become de Sitter (dS), whereas some others remain anti-de Sitter. The near-BPS walls separating these vacua may be seen as bubble walls in the theory of vacuum decay. As an outcome of our investigation of the BPS walls, we found that the decay rate of dS vacua to a collapsing space with a negative vacuum energy can be quite large. The parts of space that experience a decay to a collapsing space, or to a Minkowski vacuum, never return to dS space. The channels of irreversible vacuum decay serve as sinks for the probability flow. The existence of such sinks is a distinguishing feature of the landscape. We show that it strongly affects the probability distributions in string cosmology.

  14. Validation of absolute axial neutron flux distribution calculations with MCNP with 197Au(n,γ)198Au reaction rate distribution measurements at the JSI TRIGA Mark II reactor.

    Science.gov (United States)

    Radulović, Vladimir; Štancar, Žiga; Snoj, Luka; Trkov, Andrej

    2014-02-01

    The calculation of axial neutron flux distributions with the MCNP code at the JSI TRIGA Mark II reactor has been validated with experimental measurements of the 197Au(n,γ)198Au reaction rate. The calculated absolute reaction rate values, scaled according to the reactor power and corrected for the flux redistribution effect, are in good agreement with the experimental results. The effect of different cross-section libraries on the calculations has been investigated and shown to be minor. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Heat transfer decrease during water boiling in a tube with a stepwise heat flux distribution along the tube length

    International Nuclear Information System (INIS)

    Remizov, O.V.; Sergeev, V.V.; Yurkov, Yu.I.

    1983-01-01

    The effect of the heat flux distribution along the circular tube length on supercritical convective heat transfer, at parameters typical of steam generators heated by liquid metal, is studied. The effect of conditions in the under- and supercritical zones of a vertical tube with independently heated lower and upper sections on supercritical convective heat transfer is studied on a water circulation loop at 9.8-17.7 MPa pressure and 330-1000 kg/(m²·s) mass velocities. The experimental heat fluxes varied within the following limits: at the upper section from 0 to 474 kW/m², and at the lower section from 190 to 590 kW/m². Analysis of the obtained data shows that when the heat flux changes in the supercritical zone, rewetting of the heated surface and the simultaneous existence of two critical zones are observed. The effect of heat flux in the supercritical zone on convective heat transfer is ambiguous: heat flux growth up to 60-100 kW/m² increases the minimum values of the heat transfer factor in the supercritical zone, while further growth reduces them. It is concluded that the heat flux in the undercritical zone affects convective heat transfer in the supercritical zone mainly through changing the critical vapour content.

  16. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    Science.gov (United States)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data were compared with some limited field data. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.
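    The Longuet-Higgins model is based on a Gram-Charlier expansion about the Gaussian. The sketch below illustrates only the leading-order skewness correction (a hedged simplification, not the full series used in the paper) and checks numerically that it remains normalized and reproduces the prescribed third moment:

    ```python
    import math

    def gram_charlier_pdf(x, skew=0.2):
        # Leading-order Gram-Charlier correction to the Gaussian, the kind of
        # form underlying the Longuet-Higgins surface-elevation model.
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        he3 = x ** 3 - 3.0 * x  # probabilists' Hermite polynomial He3
        return phi * (1.0 + (skew / 6.0) * he3)

    def moment(k, skew=0.2, lo=-8.0, hi=8.0, n=4000):
        # Simple trapezoidal quadrature of x^k * p(x) over [lo, hi].
        h = (hi - lo) / n
        total = 0.0
        for i in range(n + 1):
            x = lo + i * h
            w = 0.5 if i in (0, n) else 1.0
            total += w * (x ** k) * gram_charlier_pdf(x, skew)
        return total * h

    print(moment(0), moment(3))  # ≈ 1 (normalization) and ≈ skew (3rd moment)
    ```

    The odd Hermite term integrates to zero against the Gaussian, which is why the correction adds skewness without disturbing the normalization.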

  17. A technical basis for the flux corrected local conditions critical heat flux correlation

    International Nuclear Information System (INIS)

    Luxat, J.C.

    2008-01-01

    The so-called 'flux-corrected' local conditions CHF correlation was developed at Ontario Hydro in the 1980s and was demonstrated to successfully correlate the Onset of Intermittent Dryout (OID) CHF data for 37-element fuel with a downstream-skewed axial heat flux distribution. However, because the heat flux correction factor appeared to be an ad hoc, albeit successful, modifying factor in the correlation, there was reluctance to accept the correlation more generally. This paper presents a thermalhydraulic basis, derived from two-phase flow considerations, that supports the appropriateness of the heat flux correction as a local-effects modifying factor. (author)

  18. The correlation of defect distribution in collisional phase with measured cascade collapse probability

    International Nuclear Information System (INIS)

    Morishita, K.; Ishino, S.; Sekimura, N.

    1995-01-01

    The spatial distributions of atomic displacements at the end of the collisional phase of cascade damage processes were calculated using the computer simulation code MARLOWE, which is based on the binary collision approximation (BCA). The densities of atomic displacements were evaluated in the high dense regions (HDRs) of cascades in several pure metals (Fe, Ni, Cu, Ag, Au, Mo and W). They were compared with the measured cascade collapse probabilities reported in the literature, where TEM observations were carried out using thin metal foils irradiated by low-dose ions at room temperature. We found that there exist minimum, or ''critical'', values of the atomic displacement density for an HDR to collapse into TEM-visible vacancy clusters. The critical densities are generally independent of the cascade energy in the same metal. Furthermore, the material dependence of the critical densities can be explained by the difference in vacancy mobility at the melting temperature of the target materials. This critical density calibration, which is extracted from the ion-irradiation experiments and the BCA simulations, is applied to the estimation of cascade collapse probabilities in metals irradiated by fusion neutrons. (orig.)

  19. Estimation of subcriticality and fuel concentration by using 'pattern matching' of the neutron flux distribution in a non-uniform system

    International Nuclear Information System (INIS)

    Ishitani, Kazuki; Yamane, Yoshihiro

    1999-01-01

    In nuclear fuel reprocessing plants, monitoring the spatial profile of the neutron flux with detectors such as PSPCs, in order to infer subcriticality and the distribution of fuel concentration, is very beneficial from the viewpoint of criticality safety. In this paper, a method for estimating subcriticality and fuel concentration, intended for use in non-uniform systems, is proposed. Its basic concept is pattern matching between the measured neutron flux distribution and pre-calculated ones. In any kind of subcriticality estimation, the measured neutron counts can be regarded as the input to a black box whose output is the subcriticality. We propose using an artificial neural network, or 'pattern matching', as a black box that has no clear theoretical basis; these methods rely wholly on calculated values, exploiting recent advances in the accuracy of criticality-safety computer codes. The main difference between the indirect bias estimation method and ours is that our new approach targets unknown, non-uniform systems. (J.P.N.)
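    As an illustration only (the flux shapes and parameter pairs below are hypothetical, not from the paper), the pattern-matching idea can be sketched as a nearest-profile search over a library of pre-calculated flux distributions:

    ```python
    import math

    # Hypothetical library of pre-calculated axial flux shapes, each keyed by
    # an assumed (k_eff, fuel concentration in g/L) pair. A real library would
    # come from validated criticality-safety codes.
    LIBRARY = {
        (0.95, 200.0): [1.0, 2.4, 3.1, 2.4, 1.0],
        (0.90, 150.0): [1.0, 2.0, 2.5, 2.0, 1.0],
        (0.85, 100.0): [1.0, 1.7, 2.0, 1.7, 1.0],
    }

    def normalize(profile):
        # Compare shapes, not absolute count rates.
        s = math.fsum(profile)
        return [v / s for v in profile]

    def match(measured):
        # "Pattern matching": pick the library entry whose normalized shape
        # has the smallest sum-of-squares distance to the measurement.
        m = normalize(measured)
        def dist(entry):
            return math.fsum((a - b) ** 2
                             for a, b in zip(m, normalize(entry[1])))
        return min(LIBRARY.items(), key=dist)[0]

    print(match([10.0, 20.3, 24.8, 20.1, 9.9]))  # → (0.9, 150.0)
    ```

    The paper's neural-network variant would replace the explicit minimum search with a trained mapping, but the input (a measured flux profile) and output (subcriticality and concentration estimates) are the same.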

  20. Introduction to probability with statistical applications

    CERN Document Server

    Schay, Géza

    2016-01-01

    Now in its second edition, this textbook serves as an introduction to probability and statistics for non-mathematics majors who do not need the exhaustive detail and mathematical depth provided in more comprehensive treatments of the subject. The presentation covers the mathematical laws of random phenomena, including discrete and continuous random variables, expectation and variance, and common probability distributions such as the binomial, Poisson, and normal distributions. More classical examples such as Montmort's problem, the ballot problem, and Bertrand’s paradox are now included, along with applications such as the Maxwell-Boltzmann and Bose-Einstein distributions in physics. Key features of the new edition: * 35 new exercises * Expanded section on the algebra of sets * Expanded chapters on probabilities to include more classical examples * New section on regression * Online instructors' manual containing solutions to all exercises

  1. Effect of the geometrical parameters of an optical integrator on the uniformity of the radiation flux distribution

    International Nuclear Information System (INIS)

    Vishnyakova, T.P.; Klychev, Sh.I.

    1992-01-01

    The use of optical mixers in the optical irradiators of simulators of direct and concentrated solar radiation has been proposed. In this paper, the parameters of an optical mixer are calculated geometrically, and the effect of the parameters of the optical mixer on the uniformity η of the irradiance distribution of the radiation flux on the detector is investigated. These investigations show that the light distribution from an optical mixer is close to that of an ideal uniform emitter within the region from 0 to the limit of α. 5 refs., 4 figs

  2. Spatial distribution of the neutron flux in the IEA-R1 reactor core obtained by means of foil activation

    International Nuclear Information System (INIS)

    Mestnik Filho, J.

    1979-01-01

    A three-dimensional distribution of the neutron flux in the IEA-R1 reactor, obtained by activating gold foils, is presented. The foils, of 8 mm diameter and 0.013 mm thickness, were mounted on lucite plates and located between the fuel element plates. Foil activities were measured using a 3×3 inch NaI(Tl) scintillation detector calibrated against a 4πβγ coincidence detector. Foil positions were chosen to minimize the errors of measurement; the overall estimated error on the measured flux is 5%. (Author)

  3. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    Science.gov (United States)

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving
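    For the exponential model quoted above, the annual eruption probability follows directly from the mean recurrence interval. A minimal sketch (the 7,000-year interval is an assumed illustrative value, chosen only because it reproduces the order of magnitude quoted for the Lassen Volcanic Center):

    ```python
    import math

    def annual_probability(mean_interval_years):
        # For an exponential (Poisson) renewal model with rate 1/tau, the
        # probability of at least one eruption in the next year is
        # 1 - exp(-1/tau), which is close to 1/tau when tau is large.
        return 1.0 - math.exp(-1.0 / mean_interval_years)

    # Illustrative only: a mean recurrence interval of ~7,000 years gives an
    # annual probability near the 1.4e-4 quoted for the exponential model.
    print(annual_probability(7000.0))
    ```

    The mixed-exponential and Weibull models in the paper make this probability depend on the time elapsed since the last event, which is why the regional-vent estimate can fall 12,000 years after the last eruption.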

  4. Assessment of 12 CHF prediction methods, for an axially non-uniform heat flux distribution, with the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrouk, M. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria)], E-mail: m_ferrouk@yahoo.fr; Aissani, S. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria); D' Auria, F.; DelNevo, A.; Salah, A. Bousbia [Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione, Universita di Pisa (Italy)

    2008-10-15

    The present article covers the evaluation of the performance of twelve critical heat flux methods/correlations published in the open literature. The study concerns the simulation of an axially non-uniform heat flux distribution with the RELAP5 computer code in a single boiling water reactor channel benchmark problem. The nodalization scheme employed for the particular geometry considered, as modelled in the RELAP5 code, is described. For this purpose a review of critical heat flux models/correlations applicable to non-uniform axial heat profiles is provided. Simulation results using the RELAP5 code were compared with those obtained from our computer program, which is based on three types of prediction methods: the local conditions, F-factor, and boiling-length-average approaches.

  5. Poisson Processes in Free Probability

    OpenAIRE

    An, Guimei; Gao, Mingchu

    2015-01-01

    We prove a multidimensional Poisson limit theorem in free probability, and define joint free Poisson distributions in a non-commutative probability space. We define (compound) free Poisson processes explicitly, similarly to the definitions of (compound) Poisson processes in classical probability. We prove that the sum of finitely many freely independent compound free Poisson processes is itself a compound free Poisson process. We give a step by step procedure for constructing a (compound) free Poisso...

  6. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  7. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all of them fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
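    Such simultaneous intervals can be approximated by simulation. The sketch below uses a conservative Bonferroni adjustment of pointwise order-statistic quantiles, a stand-in for the authors' exact intervals rather than their construction:

    ```python
    import random

    def simultaneous_envelope(n, alpha=0.05, n_sim=2000, seed=7):
        # Monte Carlo bands for a normal probability plot: simulate many
        # standard-normal samples of size n, sort each, and take
        # Bonferroni-adjusted (alpha/n per point) quantiles of every order
        # statistic, so joint coverage is at least 1 - alpha (conservatively).
        rng = random.Random(seed)
        sims = [sorted(rng.gauss(0.0, 1.0) for _ in range(n))
                for _ in range(n_sim)]
        lo_q, hi_q = alpha / (2 * n), 1.0 - alpha / (2 * n)
        bands = []
        for i in range(n):
            column = sorted(s[i] for s in sims)
            lo = column[max(0, int(lo_q * n_sim) - 1)]
            hi = column[min(n_sim - 1, int(hi_q * n_sim))]
            bands.append((lo, hi))
        return bands

    # A sorted, standardized sample whose points all fall inside these bands
    # is consistent with normality at (at least) the 1 - alpha level.
    bands = simultaneous_envelope(30)
    print(bands[15])  # band for a middle order statistic
    ```

    The exact intervals in the paper are tighter because they account for the dependence between order statistics instead of applying Bonferroni.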

  8. The Multivariate Gaussian Probability Distribution

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2005-01-01

    This technical report intends to gather information about the multivariate gaussian distribution, that was previously not (at least to my knowledge) to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical ...
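    One practical item such a reference typically covers is sampling: x = μ + Lz has covariance L·Lᵀ when z is standard normal and L is the Cholesky factor of the target covariance. A minimal two-dimensional sketch (not from the report itself):

    ```python
    import math
    import random

    def cholesky2(cov):
        # Lower-triangular Cholesky factor L of a 2x2 covariance matrix,
        # so that L times its transpose reproduces cov.
        (a, b), (_, d) = cov
        l11 = math.sqrt(a)
        l21 = b / l11
        l22 = math.sqrt(d - l21 * l21)
        return [[l11, 0.0], [l21, l22]]

    def sample_mvn(mean, cov, n, seed=3):
        # Transform independent standard normals through L to get the
        # desired correlation structure.
        rng = random.Random(seed)
        L = cholesky2(cov)
        out = []
        for _ in range(n):
            z0, z1 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
            out.append((mean[0] + L[0][0] * z0,
                        mean[1] + L[1][0] * z0 + L[1][1] * z1))
        return out

    pts = sample_mvn((0.0, 0.0), [[2.0, 0.6], [0.6, 1.0]], 50_000)
    cross = sum(x * y for x, y in pts) / len(pts)
    print(cross)  # sample cross-covariance, close to the target 0.6
    ```

    The same factorization underlies density evaluation and conditioning formulas for the multivariate Gaussian.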

  9. Comparison between measured and computed magnetic flux density distribution of simulated transformer core joints assembled from grain-oriented and non-oriented electrical steel

    Directory of Open Access Journals (Sweden)

    Hamid Shahrouzi

    2018-04-01

    Full Text Available The flux distribution in an overlapped linear joint constructed in the central region of an Epstein Square was studied experimentally, and the results were compared with those obtained using a computational magnetic field solver. High permeability grain-oriented (GO) and low permeability non-oriented (NO) electrical steels were compared at a nominal core flux density of 1.60 T at 50 Hz. It was found that the experimental results agreed well only at flux densities at which the reluctances of the different flux paths are similar. It was also found that the flux becomes more uniform when the working point of the electrical steel is close to the knee point of its B-H curve.

  10. Probability distribution function values in mobile phones

    Directory of Open Access Journals (Sweden)

    Luis Vicente Chamorro Marcillo

    2013-06-01

    Full Text Available Engineering, in both its academic and applied forms, like any formal research work, requires statistics, and every inferential statistical analysis requires values of probability distribution functions that are generally available only in tables. Managing those tables poses physical problems (cumbersome transport and consultation) as well as operational ones (incomplete listings and limited accuracy). The study, "Probability distribution function values in mobile phones", determined, through a needs survey of students taking statistics courses at Universidad de Nariño, that the best known and most used values correspond to the Chi-square, Binomial, Student's t, and Standard Normal distributions. It also showed the users' interest in having the values in question available in an alternative medium that corrects, at least in part, the problems presented by "the famous tables". As a contribution to the solution, we built software that allows the values of the most commonly used probability distribution functions to be obtained immediately and dynamically on mobile phones.
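    For the standard normal distribution at least, such table values can be computed on the fly from the error function. A minimal sketch (Python's `math.erf` here stands in for whatever math library a phone platform provides):

    ```python
    import math

    def std_normal_cdf(x):
        # Phi(x) = (1/2) * (1 + erf(x / sqrt(2))); this single line replaces
        # a printed table of standard normal values.
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    # Classic table entries:
    print(std_normal_cdf(1.96))  # ≈ 0.9750
    print(std_normal_cdf(0.0))   # 0.5
    ```

    Python 3.8+ also provides `statistics.NormalDist().cdf` and `.inv_cdf` for the forward and inverse lookups; the Chi-square, Binomial, and Student's t values mentioned in the abstract need either a numerical library or purpose-written routines.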

  11. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes censored-data martingales. The text includes measure theoretic...

  12. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
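The order-statistics idea behind the scoring can be sketched simply: if a trial CDF F equals the true CDF, the sorted transformed values u_(k) = F(x_(k)) behave like uniform order statistics, with u_(k) ~ Beta(k, n+1−k). A hedged stdlib-only illustration of scoring a trial CDF this way (a simplification of the idea, not the paper's sample-size-invariant universal scoring function):

```python
import math, random

def beta_logpdf(u, a, b):
    # log density of Beta(a, b) at u, via log-gamma functions
    if u <= 0.0 or u >= 1.0:
        return float("-inf")
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(u) + (b - 1.0) * math.log(1.0 - u))

def order_statistic_score(sample, trial_cdf):
    """Quasi-log-likelihood of a trial CDF: under the correct CDF the sorted
    transformed values are uniform order statistics, u_(k) ~ Beta(k, n+1-k)."""
    u = sorted(trial_cdf(x) for x in sample)
    n = len(u)
    return sum(beta_logpdf(u[k], k + 1, n - k) for k in range(n))

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

good = order_statistic_score(data, norm_cdf)                  # correct model
bad = order_statistic_score(data, lambda x: norm_cdf(x - 1))  # shifted model
# The correct CDF scores higher than the misspecified one.
```

Iterative improvement of the trial CDF, as the abstract describes, would then amount to maximizing this score over a family of candidate CDFs.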

  13. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

    Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for the structural design of wind turbines according to the new standard IEC61400-1. The present paper focuses on this task by virtue of the probability density evolution method (PDEM......), which underlies the schemes of random vibration analysis and structural reliability assessment. For illustrative purposes, the short-term rare failure probability of 5-megawatt wind turbines at given mean wind speeds and turbulence levels is investigated through the scheme of the extreme value...... distribution instead of any of the approximate schemes of fitted distributions currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits......

  14. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out......-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability...
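For intuition, first-passage probabilities of the kind this record treats analytically can also be estimated by brute-force Monte Carlo. The sketch below simulates a lightly damped oscillator under Gaussian white noise and counts barrier crossings (illustrative parameters and a crude Euler-Maruyama scheme, not the paper's integral equation method):

```python
import math, random

def first_passage_probability(barrier, T=10.0, dt=0.02, n_paths=1000,
                              omega=2.0 * math.pi, zeta=0.05, sigma=1.0, seed=0):
    """Monte Carlo estimate of P(max_t |x(t)| >= barrier) on [0, T] for a
    lightly damped linear oscillator driven by Gaussian white noise:
        x'' + 2*zeta*omega*x' + omega**2 * x = sigma * w(t)."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    steps = int(T / dt)
    crossings = 0
    for _ in range(n_paths):
        x = v = 0.0
        for _ in range(steps):
            v += (-2.0 * zeta * omega * v - omega**2 * x) * dt \
                 + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
            x += v * dt
            if abs(x) >= barrier:
                crossings += 1
                break
    return crossings / n_paths

p_low = first_passage_probability(barrier=0.05)   # low barrier: crossed often
p_high = first_passage_probability(barrier=0.5)   # high barrier: crossed rarely
```

The integral equation approach of the record exists precisely because such direct simulation becomes expensive for the very small failure probabilities of interest.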

  15. Approximation of ruin probabilities via Erlangized scale mixtures

    DEFF Research Database (Denmark)

    Peralta, Oscar; Rojas-Nandayapa, Leonardo; Xie, Wangyue

    2018-01-01

    In this paper, we extend an existing scheme for numerically calculating the probability of ruin of a classical Cramér–Lundberg reserve process having absolutely continuous but otherwise general claim size distributions. We employ a dense class of distributions that we denominate Erlangized scale...... a simple methodology for constructing a sequence of distributions having the form Π⋆G with the purpose of approximating the integrated tail distribution of the claim sizes. Then we adapt a recent result which delivers an explicit expression for the probability of ruin in the case that the claim size...... distribution is modeled as an Erlangized scale mixture. We provide simplified expressions for the approximation of the probability of ruin and construct explicit bounds for the error of approximation. We complement our results with a classical example where the claim sizes are heavy-tailed....
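As a reference point for such approximation schemes, the classical Cramér–Lundberg model admits a closed-form ruin probability in the special case of exponential claim sizes. A short sketch of that textbook benchmark (the exponential case only, not the Erlangized-scale-mixture construction of the paper):

```python
import math

def ruin_probability_exponential(u, lam, mu, c):
    """Exact ruin probability for the classical Cramér–Lundberg reserve
    process with Poisson claim arrivals (rate lam), exponential claims
    (mean mu) and premium rate c:
        psi(u) = (lam*mu/c) * exp(-(c - lam*mu) * u / (c*mu)),  c > lam*mu."""
    if c <= lam * mu:
        return 1.0  # no positive safety loading: ruin is certain
    return (lam * mu / c) * math.exp(-(c - lam * mu) * u / (c * mu))

psi0 = ruin_probability_exponential(0.0, lam=1.0, mu=1.0, c=1.25)  # lam*mu/c = 0.8
psi5 = ruin_probability_exponential(5.0, lam=1.0, mu=1.0, c=1.25)  # 0.8 * e**-1
```

Numerical schemes for general (in particular heavy-tailed) claim distributions, such as the one in this record, are typically validated against this exact special case.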

  16. Neutron flux distribution measurement in the Fort St. Vrain initial core (results of Fort St. Vrain start-up test A-7)

    International Nuclear Information System (INIS)

    Marshall, A.C.; Brown, J.R.

    1975-01-01

    A description is given of a test to measure the axial flux distribution at several radial locations in the Fort St. Vrain core representing unrodded, rodded, and partially rodded regions. The measurements were intended to verify the calculational accuracy of the three-dimensional calculational model used to compute axial power distributions for the Fort St. Vrain core. (U.S.)

  17. The probability distribution of side-chain conformations in [Leu] and [Met]enkephalin determines the potency and selectivity to mu and delta opiate receptors

    DEFF Research Database (Denmark)

    Nielsen, Bjørn Gilbert; Jensen, Morten Østergaard; Bohr, Henrik

    2003-01-01

    The structure of enkephalin, a small neuropeptide with five amino acids, has been simulated on computers using molecular dynamics. Such simulations exhibit a few stable conformations, which also have been identified experimentally. The simulations provide the possibility to perform cluster analysis...... in the space defined by potentially pharmacophoric measures such as dihedral angles, side-chain orientation, etc. By analyzing the statistics of the resulting clusters, the probability distribution of the side-chain conformations may be determined. These probabilities allow us to predict the selectivity...... of [Leu]enkephalin and [Met]enkephalin to the known mu- and delta-type opiate receptors to which they bind as agonists. Other plausible consequences of these probability distributions are discussed in relation to the way in which they may influence the dynamics of the synapse....

  18. The probability representation as a new formulation of quantum mechanics

    International Nuclear Information System (INIS)

    Man'ko, Margarita A; Man'ko, Vladimir I

    2012-01-01

    We present a new formulation of conventional quantum mechanics, in which the notion of a quantum state is identified via a fair probability distribution of the position measured in a reference frame of the phase space with rotated axes. In this formulation, the quantum evolution equation as well as the equation for finding energy levels are expressed as linear equations for the probability distributions that determine the quantum states. We also give the integral transforms relating the probability distribution (called the tomographic-probability distribution or the state tomogram) to the density matrix and the Wigner function and discuss their connection with the Radon transform. Qudit states are considered and the invertible map of the state density operators onto the probability vectors is discussed. The tomographic entropies and entropic uncertainty relations are reviewed. We demonstrate the uncertainty relations for the position and momentum and the entropic uncertainty relations in the tomographic-probability representation, which is suitable for an experimental check of the uncertainty relations.

  19. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2017-01-01

    significantly due to risk aversion. We characterize an approach for eliciting the entire subjective belief distribution that is minimally biased due to risk aversion. We offer simulated examples to demonstrate the intuition of our approach. We also provide theory to formally characterize our framework. And we...... provide experimental evidence which corroborates our theoretical results. We conclude that for empirically plausible levels of risk aversion, one can reliably elicit most important features of the latent subjective belief distribution without undertaking calibration for risk attitudes providing one...
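For background, the baseline object this record corrects for risk aversion is a proper scoring rule: for a risk-neutral agent, the quadratic (Brier-type) rule already makes truthful reporting optimal. A small illustration of that propriety property (standard textbook material, not the paper's risk-calibrated elicitation procedure):

```python
# Quadratic (Brier-type) scoring rule over K mutually exclusive events:
# if event k occurs, score(report) = 2*report[k] - sum_j report[j]**2.

def quadratic_score(report, outcome):
    return 2.0 * report[outcome] - sum(p * p for p in report)

def expected_score(report, belief):
    # expectation of the score under the agent's own belief
    return sum(b * quadratic_score(report, k) for k, b in enumerate(belief))

belief = [0.6, 0.3, 0.1]                             # the agent's latent belief
honest = expected_score(belief, belief)              # truthful report: 0.46
shaded = expected_score([0.9, 0.05, 0.05], belief)   # exaggerated report
# honest > shaded: truthful reporting maximizes the expected score
# for a risk-neutral agent (propriety).
```

The paper's point is that once the agent is risk averse, this risk-neutral optimality breaks down and the elicited distribution is biased unless the rule is calibrated.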

  20. Impact probabilities of meteoroid streams with artificial satellites: An assessment

    International Nuclear Information System (INIS)

    Foschini, L.; Cevolani, G.

    1997-01-01

    Impact probabilities of artificial satellites with meteoroid streams were calculated using data collected with the CNR forward scatter (FS) bistatic radar over the Bologna-Lecce baseline (about 700 km). Results show that the impact probabilities are 2 times higher than previously calculated values. Nevertheless, although catastrophic impacts are still rare even under meteor storm conditions, it is expected that high meteoroid fluxes can erode satellite surfaces and weaken their external structures.

  1. Electron energy distribution control by fiat: breaking from the conventional flux ratio scaling rules in etch

    Science.gov (United States)

    Ranjan, Alok; Wang, Mingmei; Sherpa, Sonam; Ventzek, Peter

    2015-03-01

    With shrinking critical dimensions, minimizing each of aspect ratio dependent etching (ARDE), bowing, undercut, and selectivity, while maintaining within-die uniformity across a wafer, means trading off one requirement against another. The problem of trade-offs is especially critical. At the root of the problem is that the roles radical flux, ion flux and ion energy play may be both good and bad: increasing one parameter helps meet one requirement but hinders meeting another. Managing a process by managing flux ratios and ion energy alone with conventional sources is not adequate because surface chemistry is uncontrollable. At the root of this lack of control is that the electron energy distribution function (eedf) has not been controlled. Fortunately, high density surface wave sources control the eedf by fiat. High density surface wave sources are characterized by distinct plasma regions: an active plasma generation region with high electron temperature (Te) and an ionization-free but chemistry-rich diffusive region (low Te region). Pressure aids in segregating the regions by providing a means for momentum relaxation between the source and the downstream region. "Spatial pulsing" allows access to plasma chemistry with reasonably high ion flux, from the active plasma generation region, just above the wafer. Low plasma potential enables precise passivation of surfaces, which is critical for atomic layer etch (ALE) or high precision etch, where plasma species can be limited to their purposed roles. High precision etch need not come at the cost of speed and manufacturability. Large ion flux at precisely controlled ion energy with RLSA™ realizes fast desorption steps for ALE without compromising process throughput and precision.

  2. Influence of nucleon density distribution in nucleon emission probability

    International Nuclear Information System (INIS)

    Paul, Sabyasachi; Nandy, Maitreyee; Mohanty, A.K.; Sarkar, P.K.; Gambhir, Y.K.

    2014-01-01

    Different decay modes are observed in heavy ion reactions at low to intermediate energies. It is interesting to study the total neutron emission in these reactions, which may be contributed to by all or many of these decay modes. In an attempt to understand the importance of the mean field and the entrance channel angular momentum, we study their influence on the emission probability of nucleons in heavy ion reactions in this work. This study owes its significance to the fact that once the populations of different states are determined, the emission probability governs the double differential neutron yield.

  3. Turbulent flux and the diffusion of passive tracers in electrostatic turbulence

    DEFF Research Database (Denmark)

    Basu, R.; Jessen, T.; Naulin, V.

    2003-01-01

    The connection between the diffusion of passive tracer particles and the anomalous turbulent flux in electrostatic drift-wave turbulence is investigated by direct numerical solutions of the 2D Hasegawa-Wakatani equations. The probability density functions for the point-wise and flux surface...

  4. Probability densities and Lévy densities

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler

    For positive Lévy processes (i.e. subordinators) formulae are derived that express the probability density or the distribution function in terms of power series in time t. The applicability of the results to finance and to turbulence is briefly indicated.

  5. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  6. Transmission probability method for solving neutron transport equation in three-dimensional triangular-z geometry

    Energy Technology Data Exchange (ETDEWEB)

    Liu Guoming [Department of Nuclear Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)], E-mail: gmliusy@gmail.com; Wu Hongchun; Cao Liangzhi [Department of Nuclear Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)

    2008-09-15

    This paper presents a transmission probability method (TPM) to solve the neutron transport equation in three-dimensional triangular-z geometry. The source within the mesh is assumed to be spatially uniform and isotropic. At the mesh surface, the constant and the simplified P{sub 1} approximation are invoked for the anisotropic angular flux distribution. Based on this model, a code TPMTDT is encoded. It was verified by three 3D Takeda benchmark problems, in which the first two problems are in XYZ geometry and the last one is in hexagonal-z geometry, and an unstructured geometry problem. The results of the present method agree well with those of Monte-Carlo calculation method and Spherical Harmonics (P{sub N}) method.

  7. Quantification of motor network dynamics in Parkinson's disease by means of landscape and flux theory.

    Directory of Open Access Journals (Sweden)

    Han Yan

    Full Text Available The basal ganglia neural circuit plays an important role in motor control. Despite significant efforts, understanding the principles and underlying mechanisms of this modulatory circuit, and of the emergence of abnormal synchronized oscillations in movement disorders, remains challenging. Dopamine loss has been proved to be responsible for Parkinson's disease. We quantitatively described the dynamics of the basal ganglia-thalamo-cortical circuit in Parkinson's disease in terms of the emergence of both abnormal firing rates and firing patterns in the circuit. We developed a potential landscape and flux framework for exploring the modulatory circuit. The driving force of the circuit can be decomposed into a gradient of the potential, which is associated with the steady-state probability distributions, and a curl probability flux term. We uncovered the underlying potential landscape as a Mexican-hat-shaped closed ring valley where abnormal oscillations emerge due to dopamine depletion. We quantified the global stability of the network through the topography of the landscape in terms of the barrier height, defined as the potential difference between the maximum potential inside the ring and the minimum potential along the ring. Both a higher barrier and a larger flux originating from detailed balance breaking result in more stable oscillations; meanwhile, more energy is consumed to support the increasing flux. Global sensitivity analysis on the landscape topography and flux indicates how changes in the underlying neural network regulatory wirings and external inputs influence the dynamics of the system. We validated two of the main hypotheses (the direct inhibition hypothesis and the output activation hypothesis) on the therapeutic mechanism of deep brain stimulation (DBS). We found that GPe appears to be another effective stimulation target for DBS besides GPi and STN. Our approach provides a general way to quantitatively explore neural networks and may

  8. Theoretical analysis of nuclear reactors (Phase II), I-V, Part V, Determining the fine neutron flux distribution by Pn method

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1962-10-01

    Expressions for the spherical harmonic moments, obtained by spherical harmonics expansion of the monoenergetic transport equation, were applied. This report presents the procedure for calculating the neutron flux distribution in the nine-zone reactor cell of the RA reactor in Vinca. The procedure was adapted for the digital computer ZUSE Z-23 by extending the diagram of the automated P3 code, which serves as a Pn code with minor changes. The needed subroutines were developed, the most important being those for modified Bessel functions of the first and second kind of n-th order. Since the Z-23 computer was operational for only 15 hours during the three months, only the subroutines for the modified Bessel functions could be tested; the results obtained were excellent. For this reason the neutron flux distribution will be calculated in the forthcoming period [sr

  9. PAH distribution and mass fluxes in the Three Gorges Reservoir after impoundment of the Three Gorges Dam.

    Science.gov (United States)

    Deyerling, Dominik; Wang, Jingxian; Hu, Wei; Westrich, Bernhard; Peng, Chengrong; Bi, Yonghong; Henkelmann, Bernhard; Schramm, Karl-Werner

    2014-09-01

    Mass fluxes of polycyclic aromatic hydrocarbons (PAHs) were calculated for the Three Gorges Reservoir (TGR) in China, based on concentration and discharge data from the Yangtze River. Virtual Organisms (VOs) were applied during four campaigns in 2008, 2009 (twice) and 2011 at sampling sites distributed from Chongqing to Maoping. The total PAH mass fluxes ranged from 110 to 2,160 mg s(-1). The highest loads were determined at Chongqing, with a decreasing trend towards Maoping in all four sampling campaigns. The PAH remediation capacity of the TGR was found to be high, as the mass flux decreased by more than half from upstream to downstream; the processes responsible are thought to be adsorption of PAHs to suspended particles, dilution and degradation. Furthermore, the dependence of PAH concentration on water depth was investigated at Maoping in front of the Three Gorges Dam: although considerable differences were revealed, no trend was observable. Sampling of water with self-packed filter cartridges confirmed a more homogeneous PAH depth distribution. Moreover, the PAH content of suspended particles was estimated from the water concentrations gathered by VOs, based on a water-particle separation model, and subsequently compared to the PAH concentrations measured in water and in the filter cartridges. It could be shown that the model predicts the concentration caused by particle-bound PAHs to be about 6 times lower than that of PAHs dissolved in water. Besides, the model estimates the proportions of 5- and 6-ring PAHs to be higher than in the water phase. Copyright © 2014 Elsevier B.V. All rights reserved.
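The mg s(-1) figures quoted follow from multiplying concentration by river discharge with a unit conversion. A sketch with hypothetical numbers (the study's actual concentrations and discharges are not reproduced here):

```python
# Mass flux = concentration x discharge, with unit bookkeeping:
# ng/L * m^3/s * (1000 L / m^3) / (1e6 ng / mg)  ->  mg/s

def mass_flux_mg_per_s(conc_ng_per_L: float, discharge_m3_per_s: float) -> float:
    return conc_ng_per_L * discharge_m3_per_s * 1000.0 / 1.0e6

# Hypothetical numbers: 15 ng/L total PAHs at a discharge of 12,000 m^3/s
flux = mass_flux_mg_per_s(15.0, 12000.0)   # 180.0 mg/s
```

Comparing such fluxes between an upstream and a downstream station is what yields the "more than half" remediation estimate in the abstract.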

  10. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. The common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of such approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin, and the optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to being innovative, this approach is interesting because of the optimization procedure itself [sr

  11. Thermal neutron flux distribution in ET-RR-2 reactor thermal column

    Directory of Open Access Journals (Sweden)

    Imam Mahmoud M.

    2002-01-01

    Full Text Available The thermal column in the ET-RR-2 reactor is intended to provide a thermal neutron field of high intensity and purity to be used for the following tasks: (a) to provide a thermal neutron flux at the neutron transmutation silicon doping position, (b) to provide a thermal flux at the neutron activation analysis position, and (c) to provide a thermal neutron flux of high intensity to the head of one of the beam tubes leading to the room specified for boron thermal neutron capture therapy. It was, therefore, necessary to determine the thermal neutron flux at the above-mentioned positions. In the present work, the neutron flux in the ET-RR-2 reactor system was calculated by applying the three-dimensional diffusion depletion code TRITON. According to these calculations, the reactor system is composed of the core, the surrounding external irradiation grid, the beryllium block, the thermal column and the water reflector in the reactor tank next to the tank wall. As a result of these calculations, the thermal neutron fluxes within the thermal column and at irradiation positions within the thermal column were obtained. Apart from this, the burn-up results for the start-up core calculated with the TRITON code were compared with those given by the reactor designer.

  12. A preliminary assessment of the effects of heat flux distribution and penetration on the creep rupture of a reactor vessel lower head

    International Nuclear Information System (INIS)

    Chu, T.Y.; Bentz, J.; Simpson, R.; Witt, R.

    1997-01-01

    The objective of the Lower Head Failure (LHF) Experiment Program is to experimentally investigate and characterize the failure of the reactor vessel lower head due to thermal and pressure loads under severe accident conditions. The experiments are performed using 1/5-scale models of a typical PWR pressure vessel, for various internal pressures and imposed heat flux distributions, with and without instrumentation guide tube penetrations. The experimental program is complemented by a modest modeling program based on the application of vessel creep rupture codes developed in the TMI Vessel Investigation Project. The first three experiments under the LHF program investigated the creep rupture of simulated reactor pressure vessels without penetrations. The heat flux distributions for the three experiments were uniform (LHF-1), center-peaked (LHF-2), and side-peaked (LHF-3), respectively. For all the experiments, appreciable vessel deformation was observed to initiate at vessel wall temperatures above 900 K, and the vessel typically failed at approximately 1000 K. The size of the failure was always observed to be smaller than the heated region. For experiments with non-uniform heat flux distributions, failure typically occurred in the region of peak temperature. A brief discussion of the effect of penetrations is also presented.

  13. Probability theory and mathematical statistics for engineers

    CERN Document Server

    Pugachev, V S

    1984-01-01

    Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector

  14. VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Smirnov Vladimir Alexandrovich

    2012-10-01

    Full Text Available The article deals with the probability analysis of a vibration isolation system for high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. The external sources of low-frequency vibrations may include the natural city background or internal low-frequency sources inside buildings (pedestrian activity, HVAC). Assuming a Gaussian distribution, the author estimates the probability that the relative displacement of the isolated mass remains below the vibration criteria. The problem is solved in the three-dimensional space spanned by the system parameters, including damping and natural frequency. From this probability distribution, the chance of exceeding the vibration criteria for a vibration isolation system is evaluated. Optimal system parameters – damping and natural frequency – are then derived so that the probability of exceeding the vibration criteria VC-E and VC-D is less than 0.04.
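Under the Gaussian assumption the exceedance probability has a closed form: for a zero-mean response, P(|X| > c) = 1 − erf(c/(σ√2)). A small sketch with hypothetical numbers (the r.m.s. value and criterion below are placeholders, not the article's data):

```python
import math

def prob_exceed(criterion_um: float, rms_um: float) -> float:
    """P(|X| > criterion) for a zero-mean Gaussian displacement X
    with the given r.m.s. value: 1 - erf(c / (sigma * sqrt(2)))."""
    return 1.0 - math.erf(criterion_um / (rms_um * math.sqrt(2.0)))

# Hypothetical numbers: 0.4 um r.m.s. response against a 1.0 um criterion
exceed = prob_exceed(1.0, 0.4)   # ~0.0124, below a 0.04 target
```

Sweeping `rms_um` as a function of damping and natural frequency is what turns this pointwise check into the optimization over system parameters described in the abstract.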

  15. Axial and radial distribution of neutron fluxes in the irradiation channels of the Ghana Research Reactor-1 using foil activation analysis and Monte Carlo

    International Nuclear Information System (INIS)

    Abrefah, G.R.

    2009-02-01

    The Monte Carlo method and experimental methods were used to determine the neutron fluxes in the irradiation channels of the Ghana Research Reactor-1. The MCNP5 code was used to simulate the radial and axial distributions of the neutron fluxes within all ten irradiation channels, and the results obtained were compared with the experimental results. After the MCNP simulation and the experimental procedure, it was observed that, axially, the fluxes rise to a peak before falling and finally leveling out. It was also observed, both axially and radially, that the fluxes in the centre of the channels were lower than at the sides: radially, the fluxes dip in the centre while increasing steadily towards the sides of the channels. The results show that there are flux variations within the irradiation channels, both axially and radially. (au)

  16. Probability Distributions for Cyclone Key Parameters and Cyclonic Wind Speed for the East Coast of Indian Region

    Directory of Open Access Journals (Sweden)

    Pradeep K. Goyal

    2011-09-01

    Full Text Available This paper presents a study of the probability distributions of key cyclone parameters and of cyclonic wind speed, conducted by analyzing the cyclone track records obtained from the India Meteorological Department for the east coast region of India. The dataset of historical landfalling storm tracks in India from 1975–2007, with latitude/longitude and landfall locations, is used to map the cyclone tracks in the region of study. Statistical tests were performed to find a best-fit distribution to the track data for each cyclone parameter. These parameters include the central pressure difference, the radius of maximum wind speed, the translation velocity, and the track angle with the site, and they are used to generate digitally simulated cyclones using wind field simulation techniques. For this, different sets of values for all the cyclone key parameters are generated randomly from their probability distributions. Using these simulated values of the cyclone key parameters, the distribution of wind velocity at a particular site is obtained. The same distribution of wind velocity at the site is also obtained from the actual track records and using the distributions of the cyclone key parameters as published in the literature. The simulated distribution is compared with the wind speed distributions obtained from the actual track records. The findings are useful in cyclone disaster mitigation.
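The simulation step described, sampling each key parameter from its fitted distribution and propagating the samples through a wind-field model, can be sketched as follows. All distributions and the pressure-wind proxy below are hypothetical placeholders, not the fitted results of the study:

```python
import random

random.seed(42)

# Hypothetical fitted distributions for the cyclone key parameters
# (placeholders; the study fits these to IMD track records).
def sample_cyclone():
    return {
        "dp_hpa":    random.lognormvariate(3.0, 0.5),   # central pressure difference
        "rmax_km":   random.weibullvariate(40.0, 2.0),  # radius of maximum wind speed
        "vt_kmh":    random.gauss(18.0, 5.0),           # translation velocity
        "angle_deg": random.uniform(0.0, 360.0),        # track angle with the site
    }

# Stand-in for the wind-field simulation: an illustrative pressure-wind
# proxy that uses only the sampled pressure difference.
def site_wind_speed(cyc):
    return 4.4 * cyc["dp_hpa"] ** 0.76

speeds = sorted(site_wind_speed(sample_cyclone()) for _ in range(10000))
v_median = speeds[len(speeds) // 2]      # median simulated site wind speed
v_98 = speeds[int(0.98 * len(speeds))]   # upper-tail design value
```

The empirical distribution of `speeds` plays the role of the simulated site wind-speed distribution that the paper compares against the one derived from actual track records.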

  17. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Science.gov (United States)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
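The MCMC-Bayesian calibration loop described can be illustrated with a toy one-parameter forward model and a random-walk Metropolis sampler (purely illustrative: CLM4's parameters, forward model and likelihood are far richer than this sketch):

```python
import math, random

# Toy forward model: predicted runoff as a linear response to forcing,
# scaled by one hydrologic parameter theta (not CLM4).
def forward(theta, forcing):
    return [theta * f for f in forcing]

random.seed(0)
forcing = [random.uniform(0.5, 2.0) for _ in range(50)]
theta_true = 0.7
obs = [y + random.gauss(0.0, 0.05) for y in forward(theta_true, forcing)]

def log_posterior(theta):
    if not 0.0 < theta < 2.0:          # uniform prior on (0, 2)
        return float("-inf")
    sse = sum((o - p) ** 2 for o, p in zip(obs, forward(theta, forcing)))
    return -sse / (2.0 * 0.05 ** 2)    # Gaussian likelihood, sigma = 0.05

# Random-walk Metropolis sampler
theta, lp = 1.0, log_posterior(1.0)
samples = []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.05)
    lp_prop = log_posterior(prop)
    if math.log(random.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop      # accept the proposal
    samples.append(theta)

posterior = samples[1000:]             # discard burn-in
theta_hat = sum(posterior) / len(posterior)
```

As in the study, the spread of `posterior` is the predictive interval for the calibrated parameter, and it narrows as more observations are assimilated.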

  18. The flux distribution from a 1.25 m2 target aligned heliostat: comparison of ray tracing and experimental results

    CSIR Research Space (South Africa)

    Maliage, M

    2012-05-01

    Full Text Available The purpose of this paper is to validate SolTrace for concentrating solar investigations at CSIR by means of a test case: the comparison of the flux distribution in the focal spot of a 1.25 m2 target aligned heliostat predicted by the ray tracing...


  19. Spatial probability aids visual stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Michael Druker

    2010-08-01

    Full Text Available We investigated whether the statistical predictability of a target's location would influence how quickly and accurately it was classified. Recent results have suggested that spatial probability can be a cue for the allocation of attention in visual search. One explanation for probability cuing is spatial repetition priming. In our two experiments we used probability distributions that were continuous across the display rather than relying on a few arbitrary screen locations. This produced fewer spatial repeats and allowed us to dissociate the effect of a high probability location from that of short-term spatial repetition. The task required participants to quickly judge the color of a single dot presented on a computer screen. In Experiment 1, targets were more probable in an off-center hotspot of high probability that gradually declined to a background rate. Targets garnered faster responses if they were near earlier target locations (priming) and if they were near the high probability hotspot (probability cuing). In Experiment 2, target locations were chosen on three concentric circles around fixation. One circle contained 80% of targets. The value of this ring distribution is that it allowed for a spatially restricted high probability zone in which sequentially repeated trials were not likely to be physically close. Participant performance was sensitive to the high-probability circle in addition to the expected effects of eccentricity and the distance to recent targets. These two experiments suggest that inhomogeneities in spatial probability can be learned and used by participants on-line and without prompting as an aid for visual stimulus discrimination and that spatial repetition priming is not a sufficient explanation for this effect. Future models of attention should consider explicitly incorporating the probabilities of target locations and features.

  20. Wigner function and the probability representation of quantum states

    Directory of Open Access Journals (Sweden)

    Man’ko Margarita A.

    2014-01-01

    Full Text Available The relation of the Wigner function with the fair probability distribution called tomographic distribution or quantum tomogram associated with the quantum state is reviewed. The connection of the tomographic picture of quantum mechanics with the integral Radon transform of the Wigner quasidistribution is discussed. The Wigner–Moyal equation for the Wigner function is presented in the form of a kinetic equation for the tomographic probability distribution both in quantum mechanics and in the classical limit of the Liouville equation. The calculation of moments of physical observables in terms of integrals with the state tomographic probability distributions is constructed having the standard form of averaging in probability theory. New uncertainty relations for the position and momentum are written in terms of optical tomograms suitable for direct experimental check. Some recent experiments on checking the uncertainty relations including the entropic uncertainty relations are discussed.

  1. Benchmarking PARTISN with Analog Monte Carlo: Moments of the Neutron Number and the Cumulative Fission Number Probability Distributions

    Energy Technology Data Exchange (ETDEWEB)

    O'Rourke, Patrick Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-27

    The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.
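
    A toy analog Monte Carlo illustration of tallying the neutron-number PDF: the branching process below (each neutron is either captured or causes a fission producing exactly two secondaries, with equal probability) is an assumed simplification for the sketch, not the physics treated in the report.

```python
import random

random.seed(1)

# One generation step: capture (0 offspring) or fission (NU offspring).
P_FISSION = 0.5
NU = 2          # neutrons per fission (assumed fixed for simplicity)

def evolve(n0, generations):
    n = n0
    for _ in range(generations):
        n = sum(NU for _ in range(n) if random.random() < P_FISSION)
    return n

# Analog Monte Carlo: tally the PDF of the neutron number at a final time.
trials = 20000
counts = {}
for _ in range(trials):
    n = evolve(1, 3)
    counts[n] = counts.get(n, 0) + 1

# First moment of the tallied PDF; the chain is critical (P_FISSION*NU = 1),
# so the mean neutron number stays near 1 while the distribution spreads.
mean = sum(k * c for k, c in counts.items()) / trials
print(mean)
```

    Higher moments follow the same pattern, replacing k with k**2, k**3, and so on in the tally sum.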

  2. Application of the Galerkin's method to the solution of the one-dimensional integral transport equation: generalized collision probabilities taken in account the flux gradient and the linearly anisotropic scattering

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-04-01

    For the one-dimensional geometries, the transport equation with linearly anisotropic scattering can be reduced to a single integral equation; this is a singular-kernel FREDHOLM equation of the second kind. When a conventional projective method, that of GALERKIN, is applied to the solution of this equation, the well-known collision probability algorithm is obtained. Piecewise polynomial expansions are used to represent the flux. In the ANILINE code, the flux is assumed to be linear in plane geometry and parabolic in both cylindrical and spherical geometries. An integral relationship was found between the one-dimensional isotropic and anisotropic kernels; this allows one to reduce the new matrix elements (issuing from the anisotropic kernel) to classic collision probabilities of the isotropic scattering equation. For cylindrical and spherical geometries an approximate representation of the current was used to avoid an additional numerical integration. Reflective boundary conditions were considered; in plane geometry the reflection is assumed specular, while for the other geometries the isotropic reflection hypothesis has been adopted. Further, the ANILINE code can treat an incoming isotropic current. Numerous checks were performed in monokinetic theory. Critical radii and albedos were calculated for homogeneous slabs, cylinders and spheres. For heterogeneous media, the thermal utilization factor obtained by this method was compared with the theoretical result based upon a formula by BENOIST. Finally, ANILINE was incorporated into the multigroup APOLLO code, which made it possible to analyse the MINERVA experimental reactor in transport theory with 99 groups. The ANILINE method is particularly suited to the treatment of strongly anisotropic media with considerable flux gradients. It is also well adapted to the calculation of reflectors and, in general, to the exact analysis of anisotropic effects in large-sized media [fr

  3. A probability distribution model of tooth pits for evaluating time-varying mesh stiffness of pitting gears

    Science.gov (United States)

    Lei, Yaguo; Liu, Zongyao; Wang, Delong; Yang, Xiao; Liu, Huan; Lin, Jing

    2018-06-01

    Tooth damage often causes a reduction in gear mesh stiffness. Thus, time-varying mesh stiffness (TVMS) can be treated as an indication of gear health conditions. This study investigates the mesh stiffness variations of a pair of external spur gears with tooth pitting, and proposes a new model for describing tooth pitting based on probability distribution. In the model, considering the appearance and development process of tooth pitting, we model the pitting on the surface of spur gear teeth as a series of pits with a uniform distribution in the direction of tooth width and a normal distribution in the direction of tooth height. In addition, four pitting degrees, from no pitting to severe pitting, are modeled. Finally, influences of tooth pitting on TVMS are analyzed in detail and the proposed model is validated by comparison with a finite element model. The comparison results show that the proposed model is effective for the TVMS evaluations of pitting gears.
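
    The pit-placement idea (uniform across the tooth width, normal along the tooth height) can be sketched as follows; the tooth dimensions and distribution parameters are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

# Hypothetical tooth dimensions (mm); not from the paper.
TOOTH_WIDTH = 20.0
TOOTH_HEIGHT = 6.0

def sample_pits(n_pits):
    """Pit centers: uniform across the width, normal along the height
    (rejected if outside the tooth), mimicking the proposed model."""
    pits = []
    while len(pits) < n_pits:
        x = random.uniform(0.0, TOOTH_WIDTH)                  # width: uniform
        y = random.gauss(TOOTH_HEIGHT / 2, TOOTH_HEIGHT / 6)  # height: normal
        if 0.0 <= y <= TOOTH_HEIGHT:
            pits.append((x, y))
    return pits

pits = sample_pits(500)
mean_y = sum(y for _, y in pits) / len(pits)
print(round(mean_y, 1))   # pits cluster near mid-height, about 3.0
```

    Increasing the pit count then stands in for the paper's progression from no pitting to severe pitting.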

  4. Uncertainty about probability: a decision analysis perspective

    International Nuclear Information System (INIS)

    Howard, R.A.

    1988-01-01

    The issue of how to think about uncertainty about probability is framed and analyzed from the viewpoint of a decision analyst. The failure of nuclear power plants is used as an example. The key idea is to think of probability as describing a state of information on an uncertain event, and to pose the issue of uncertainty in this quantity as uncertainty about a number that would be definitive: it has the property that you would assign it as the probability if you knew it. Logical consistency requires that the probability to assign to a single occurrence in the absence of further information be the mean of the distribution of this definitive number, not the median as is sometimes suggested. Any decision that must be made without the benefit of further information must also be made using the mean of the definitive number's distribution. With this formulation, one finds further that the probability of r occurrences in n exchangeable trials will depend on the first n moments of the definitive number's distribution. In making decisions, the expected value of clairvoyance on the occurrence of the event must be at least as great as that on the definitive number. If one of the events in question occurs, then the increase in probability of another such event is readily computed. This means, in terms of coin tossing, that unless one is absolutely sure of the fairness of a coin, seeing a head must increase the probability of heads, in distinction to usual thought. A numerical example for nuclear power shows that the failure of one plant of a group with a low probability of failure can significantly increase the probability that must be assigned to failure of a second plant in the group.
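
    The coin-tossing claim follows because P(second head | first head) = E[p**2]/E[p] >= E[p], with equality only when the distribution of the definitive number is a point mass. A numeric check with an assumed two-point prior:

```python
# Two-point prior on the "definitive" heads probability p:
# the coin is either biased low or high (illustrative values).
ps = [0.4, 0.6]
w  = [0.5, 0.5]

p_head = sum(wi * pi for wi, pi in zip(w, ps))              # E[p] = 0.5
p_head_given_head = sum(wi * pi * pi                        # E[p^2] / E[p]
                        for wi, pi in zip(w, ps)) / p_head
print(p_head, p_head_given_head)   # 0.5, then 0.52: a head raises P(heads)
```

    Only when the prior collapses to a single value of p (absolute certainty of fairness) does the conditional probability stay at 0.5.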

  5. The area distribution of two-dimensional random walks and non-Hermitian Hofstadter quantum mechanics

    International Nuclear Information System (INIS)

    Matveenko, Sergey; Ouvry, Stéphane

    2014-01-01

    When random walks on a square lattice are biased horizontally to move solely to the right, the probability distribution of their algebraic area can be obtained exactly (Mashkevich and Ouvry 2009 J. Stat. Phys. 137 71). We explicitly map this biased classical random system onto a non-Hermitian Hofstadter-like quantum model where a charged particle on a square lattice coupled to a perpendicular magnetic field hops only to the right. For the commensurate case, when the magnetic flux per unit cell is rational, an exact solution of the quantum model is obtained. The periodicity of the lattice allows one to relate traces of the Nth power of the Hamiltonian to probability distribution generating functions of biased walks of length N. (paper)

  6. Numerical simulation of a fractional model of temperature distribution and heat flux in the semi infinite solid

    Directory of Open Access Journals (Sweden)

    Anupama Choudhary

    2016-03-01

    Full Text Available In this paper, a fractional model for the computation of the temperature and heat flux distribution in a semi-infinite solid is discussed, which is subjected to a spatially decomposing, time-dependent laser source. The apt dimensionless parameters are identified and the reduced temperature and heat flux as a function of these parameters are presented in numerical form. Some special cases of practical interest are also discussed. The solution is derived by the application of the Laplace transform, the Fourier sine transform and their derivatives. Also, we developed an alternative solution by using the Sumudu transform, the Fourier transform and their derivatives. These results are obtained in compact and elegant forms in terms of the generalized Mittag-Leffler function, which are suitable for numerical computation.
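
    For numerical work, the one-parameter Mittag-Leffler function can be approximated by truncating its defining series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1). The sketch below is an assumed, naive implementation, adequate only for moderate arguments and unrelated to the paper's own computations.

```python
import math

def mittag_leffler(alpha, z, terms=50):
    """Truncated series for E_alpha(z); a sketch, not a production
    algorithm (series acceleration is needed for large |z|)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity check against the classical limit: E_1(z) = exp(z).
print(round(mittag_leffler(1.0, 1.0), 6))   # ~2.718282
```

    For alpha between 0 and 1 the same routine reproduces the sub-exponential relaxation typical of fractional heat conduction.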

  7. Distribution, regional sources and deposition fluxes of organochlorine pesticides in precipitation in Guangzhou, South China

    Science.gov (United States)

    Huang, De-Yin; Peng, Ping'an; Xu, Yi-Gang; Sun, Cui-Xiang; Deng, Hong-Mei; Deng, Yun-Yun

    2010-07-01

    We analyzed rainwater collected from multiple sites in Guangzhou, China, from March to August 2005, with the aim of characterizing the distribution, regional sources and deposition fluxes of organochlorine pesticides (OCPs) in South China. Eight species of organochlorine pesticide were detected, including hexachlorocyclohexanes (HCHs), dichlorodiphenyltrichloroethanes (DDTs), and endosulfans. Volume-weighted mean monthly total concentrations varied from 3.65 ± 0.95 to 9.37 ± 2.63 ng L⁻¹, and the estimated total wet deposition flux was about 11.43 ± 3.27 µg m⁻² during the monitoring period. Pesticides were mainly detected in the dissolved phase. Distribution coefficients between particulate and dissolved phases in March and April were generally higher than in other months. HCHs, p,p'-DDD and p,p'-DDT in precipitation were attributed to both the residues and present usage of insecticides in the Pearl River Delta. The concentrations of p,p'-DDD + p,p'-DDT were relatively high from April to August, which was related to the usage of antifouling paints containing DDT for fishing ships in seaports of the South China Sea in summer. In contrast, endosulfans were relatively high in March, which was related to their seasonal atmospheric transport from cotton fields in eastern China by the Asian winter monsoon. The consistency of the variation of endosulfan, p,p'-DDD and p,p'-DDT concentrations with the alternation of summer and winter monsoon suggested that the Asian monsoon played an important role in the long-range transport of OCPs. In addition, the wet deposition of OCPs may influence not only Pearl River water but also the surface land distributions of pesticides in the Guangzhou area, especially for endosulfans, p,p'-DDD and p,p'-DDT.

  8. Probability distribution functions for ELM bursts in a series of JET tokamak discharges

    International Nuclear Information System (INIS)

    Greenhough, J; Chapman, S C; Dendy, R O; Ward, D J

    2003-01-01

    A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour

  9. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Full Text Available Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.

  10. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and the crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands.The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step.The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving

  11. Assignment of probability distributions for parameters in the 1996 performance assessment for the Waste Isolation Pilot Plant. Part 1: description of process

    International Nuclear Information System (INIS)

    Rechard, Rob P.; Tierney, Martin S.

    2005-01-01

    A managed process was used to consistently and traceably develop probability distributions for parameters representing epistemic uncertainty in four preliminary and the final 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The key to the success of the process was the use of a three-member team consisting of a Parameter Task Leader, PA Analyst, and Subject Matter Expert. This team, in turn, relied upon a series of guidelines for selecting distribution types. The primary function of the guidelines was not to constrain the actual process of developing a parameter distribution but rather to establish a series of well-defined steps where recognized methods would be consistently applied to all parameters. An important guideline was to use a small set of distributions satisfying the maximum entropy formalism. Another important guideline was the consistent use of the log transform for parameters with large ranges (i.e., maximum/minimum > 10³). A parameter development team assigned 67 probability density functions (PDFs) in the 1989 PA and 236 PDFs in the 1996 PA using these and other guidelines described

  12. Concepts of probability theory

    CERN Document Server

    Pfeiffer, Paul E

    1979-01-01

    Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or math. Includes problems with answers and six appendixes. 1965 edition.

  13. Characterization of ion fluxes and heat fluxes for PMI relevant conditions on Proto-MPEX

    Science.gov (United States)

    Beers, Clyde; Shaw, Guinevere; Biewer, Theodore; Rapp, Juergen

    2016-10-01

    Plasma characterization, in particular the particle flux and the electron and ion temperature distributions nearest to an exposed target, is critical to quantifying Plasma Surface Interaction (PSI). In the Proto-Material Plasma Exposure eXperiment (Proto-MPEX), the ion fluxes and heat fluxes are derived from double Langmuir probes (DLP) and Thomson scattering in front of the target, assuming Bohm conditions at the sheath entrance. Power fluxes derived from ne and Te measurements are compared to heat fluxes measured with IR thermography. The comparison will allow conclusions on the sheath heat transmission coefficient to be drawn experimentally. Different experimental conditions (low and high density plasmas, 0.5–6 × 10¹⁹ m⁻³, with different magnetic configurations) are compared. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
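
    The derivation of a power flux from ne and Te under Bohm conditions can be sketched as q = gamma * ne * cs * Te, with cs the Bohm (sound) speed and gamma an assumed sheath heat transmission coefficient; the numbers below are illustrative, not Proto-MPEX measurements.

```python
import math

E_CHARGE = 1.602e-19   # J per eV
M_D = 3.344e-27        # deuterium ion mass, kg

def sheath_heat_flux(n_e, T_e_eV, T_i_eV, gamma=7.0):
    """Heat flux at the sheath entrance, q = gamma * n_e * c_s * T_e,
    with Bohm speed c_s; gamma ~ 7 is a commonly assumed value."""
    c_s = math.sqrt((T_e_eV + T_i_eV) * E_CHARGE / M_D)   # m/s
    return gamma * n_e * c_s * T_e_eV * E_CHARGE          # W/m^2

# Illustrative linear-device numbers (assumed, not measured):
q = sheath_heat_flux(n_e=2e19, T_e_eV=5.0, T_i_eV=5.0)
print(f"{q / 1e6:.1f} MW/m^2")
```

    Comparing such a derived q against IR-thermography heat fluxes is what lets the transmission coefficient gamma itself be constrained experimentally.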

  14. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

    This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice...... probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications....... The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models....

  15. Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Shabbir, Aqsa

    2016-07-07

    In this doctoral work, pattern recognition techniques are developed and applied to data from tokamak plasmas, in order to contribute to a systematic analysis of edge-localized modes (ELMs). We employ probabilistic models for a quantitative data description geared towards an enhanced systematization of ELM phenomenology. Hence, we start from the point of view that the fundamental object resulting from the observation of a system is a probability distribution, with every single measurement providing a sample from this distribution. In exploring the patterns emerging from the various ELM regimes and relations, we need methods that can handle the intrinsic probabilistic nature of the data. The original contributions of this work are twofold. First, several novel pattern recognition methods in non-Euclidean spaces of probability distribution functions (PDFs) are developed and validated. The second main contribution lies in the application of these and other techniques to a systematic analysis of ELMs in tokamak plasmas. In regard to the methodological aims of the work, we employ the framework of information geometry to develop pattern visualization and classification methods in spaces of probability distributions. In information geometry, a family of probability distributions is considered as a Riemannian manifold. Every point on the manifold represents a single PDF and the distribution parameters provide local coordinates on the manifold. The Fisher information plays the role of a Riemannian metric tensor, enabling calculation of geodesic curves on the surface. The length of such curves yields the geodesic distance (GD) on probabilistic manifolds, which is a natural similarity (distance) measure between PDFs. Equipped with a suitable distance measure, we extrapolate several distance-based pattern recognition methods to the manifold setting. This includes k-nearest neighbor (kNN) and conformal predictor (CP) methods for classification, as well as multidimensional
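
    For univariate normal distributions, the geodesic distance under the Fisher metric has a known closed form from the hyperbolic geometry of the Gaussian manifold; the sketch below uses that formula (the equal-mean case reduces to sqrt(2)*|ln(s1/s2)|) and is not code from the thesis.

```python
import math

def fisher_rao_gaussian(mu1, s1, mu2, s2):
    """Geodesic (Fisher-Rao) distance between two univariate normals
    N(mu1, s1) and N(mu2, s2), via the hyperbolic closed form."""
    num = (mu1 - mu2) ** 2 / 2 + (s1 - s2) ** 2
    den = (mu1 - mu2) ** 2 / 2 + (s1 + s2) ** 2
    delta = math.sqrt(num / den)
    return 2 * math.sqrt(2) * math.atanh(delta)

# Equal means: the distance reduces to sqrt(2) * |ln(s1/s2)|.
d = fisher_rao_gaussian(0.0, 1.0, 0.0, 2.0)
print(round(d, 4))   # sqrt(2) * ln 2, about 0.9803
```

    A distance like this is what kNN-style classifiers need in order to operate directly on distributions rather than on point measurements.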

  16. Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    International Nuclear Information System (INIS)

    Shabbir, Aqsa

    2016-01-01

    In this doctoral work, pattern recognition techniques are developed and applied to data from tokamak plasmas, in order to contribute to a systematic analysis of edge-localized modes (ELMs). We employ probabilistic models for a quantitative data description geared towards an enhanced systematization of ELM phenomenology. Hence, we start from the point of view that the fundamental object resulting from the observation of a system is a probability distribution, with every single measurement providing a sample from this distribution. In exploring the patterns emerging from the various ELM regimes and relations, we need methods that can handle the intrinsic probabilistic nature of the data. The original contributions of this work are twofold. First, several novel pattern recognition methods in non-Euclidean spaces of probability distribution functions (PDFs) are developed and validated. The second main contribution lies in the application of these and other techniques to a systematic analysis of ELMs in tokamak plasmas. In regard to the methodological aims of the work, we employ the framework of information geometry to develop pattern visualization and classification methods in spaces of probability distributions. In information geometry, a family of probability distributions is considered as a Riemannian manifold. Every point on the manifold represents a single PDF and the distribution parameters provide local coordinates on the manifold. The Fisher information plays the role of a Riemannian metric tensor, enabling calculation of geodesic curves on the surface. The length of such curves yields the geodesic distance (GD) on probabilistic manifolds, which is a natural similarity (distance) measure between PDFs. Equipped with a suitable distance measure, we extrapolate several distance-based pattern recognition methods to the manifold setting. This includes k-nearest neighbor (kNN) and conformal predictor (CP) methods for classification, as well as multidimensional

  17. Understanding the anisotropic ion distributions within magnetotail dipolarizing flux bundles

    Science.gov (United States)

    Zhou, X.; Runov, A.; Angelopoulos, V.; Birn, J.

    2017-12-01

    Dipolarizing flux bundles (DFBs), earthward-propagating structures with an enhanced northward magnetic field (Bz) component, are usually believed to carry a plasma population different from that in the ambient magnetotail plasma sheet. The ion distribution functions within the DFB, however, have recently been found to be largely controlled by the ion adiabaticity parameter κ in the ambient plasma sheet outside the DFBs. According to these observations, ambient κ values of 2-3 usually correspond to a strong perpendicular anisotropy of suprathermal ions within the DFBs, whereas for lower κ values the ions inside the DFBs become more isotropic. Here we utilize a simple, test-particle model to explore the nature of the anisotropy and its dependence on the ambient κ values. We find that the ion anisotropy originates from successive ion reflections and reentries to the DFBs, during which the ions can be consecutively accelerated in the perpendicular direction by the DFB-carried electric field. This acceleration process may be interrupted, however, when the magnetic field lines are highly curved in the ambient plasma sheet. In this case, the ion trajectories are most stochastic outside the DFB region, which makes the reflected ions less likely to return to the DFBs for another cycle of acceleration; as a consequence, the perpendicular ion anisotropy does not appear. Given that the DFB ions are a free energy source for instabilities when they are injected towards Earth, our simple model (which reproduces most observational features of the anisotropic DFB ion distributions) may shed new light on the coupling process between the magnetotail and the inner magnetosphere.

  18. Investigation of Probability Distributions Using Dice Rolling Simulation

    Science.gov (United States)

    Lukac, Stanislav; Engel, Radovan

    2010-01-01

    Dice are considered one of the oldest gambling devices and thus many mathematicians have been interested in various dice gambling games in the past. Dice have been used to teach probability, and dice rolls can be effectively simulated using technology. The National Council of Teachers of Mathematics (NCTM) recommends that teachers use simulations…

  19. Multi-flux-tube system in the dual Ginzburg-Landau theory

    International Nuclear Information System (INIS)

    Ichie, H.; Suganuma, H.; Toki, H.

    1996-01-01

    We study the multi-flux-tube system in terms of the dual Ginzburg-Landau theory. We consider two periodic cases, where the directions of all the flux tubes are the same in one case and alternating in the other case for neighboring flux tubes. We formulate the multi-flux-tube system by regarding it as the system of two flux tubes penetrating through a two-dimensional spherical surface. We find that the multi-flux-tube configuration becomes uniform above some critical flux-tube number density ρc = 1.3–1.7 fm⁻². On the other hand, the inhomogeneity of the color electric distribution appears when the flux-tube density is smaller than ρc. We study the inhomogeneity of the color electric distribution in relation to the flux-tube number density, and discuss the quark-gluon plasma formation process in ultrarelativistic heavy-ion collisions. copyright 1996 The American Physical Society

  20. PODESY program for flux mapping of CNA II reactor:

    International Nuclear Information System (INIS)

    Ribeiro Guevara, Sergio

    1988-01-01

    The PODESY program, developed by KWU, calculates the spatial flux distribution of the CNA II reactor through a three-dimensional expansion of 90 incore detector measurements. The calculation is made in three steps: a) short-term calculation, which considers the control rod positions and has to be done each time the flux mapping is calculated; b) medium-term calculation, which includes a local burn-up dependent calculation made by diffusion methods in macro-cell configurations (seven channels in hexagonal distribution); and c) long-term calculation, or macroscopic flux determination, that is a fitting and expansion of measured fluxes, previously corrected for local effects, using the eigenfunctions of the modified diffusion equation. The paper outlines the development of step (c) of the calculation. The incore detectors have been located in the central zone of the core. In order to obtain low errors in the expansion procedure it is necessary to include additional points, whose flux values are assumed to be equivalent to detector measurements. These flux values are calculated with detector measurements and a spatial flux distribution calculated by the PUMA code. This PUMA calculation employs a smooth burn-up distribution (local burn-up variations are considered in step (b) of the whole calculation) representing the state of core evolution at the calculation time. The core evolution referred to ends when the equilibrium core condition is reached. Additionally, a calculation method to be employed in the plant in case of incore detector failures is proposed. (Author) [es

  1. Graphical method for determination of critical dimensions and neutron flux distribution in multi zone nuclear reactors; Graficka metoda za odredjivanje kriticnih dimenzija i raspodele fluksa kod multiregionalnih nuklearnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Radanovic, Lj; Bingulac, S; Lazarevic, B; Matausek, M [The Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1964-07-01

    This paper describes a graphical method for calculating the neutron flux distribution using normalized Riccati equations. It was shown that the solutions of adequately normalized Riccati equations could be used as standard curves for determining the critical dimensions and radial flux distribution in multi-zone nuclear reactors. The method is applicable regardless of the number and position of the regions in the core.

  2. Quantifying Surface Energy Flux Estimation Uncertainty Using Land Surface Temperature Observations

    Science.gov (United States)

    French, A. N.; Hunsaker, D.; Thorp, K.; Bronson, K. F.

    2015-12-01

    Remote sensing with thermal infrared is widely recognized as a good way to estimate surface heat fluxes, map crop water use, and detect water-stressed vegetation. When combined with net radiation and soil heat flux data, observations of sensible heat fluxes derived from land surface temperatures (LST) are indicative of instantaneous evapotranspiration (ET). There are, however, substantial reasons LST data may not provide the best way to estimate ET. For example, it is well known that observations and models of LST, air temperature, or estimates of transport resistances may be so inaccurate that physically based models yield non-meaningful results. Furthermore, visible and near-infrared remote sensing observations collected at the same time as LST often yield physically plausible results on their own because they are constrained by less dynamic surface conditions such as green fractional cover. Although sensitivity studies exist that help identify likely sources of error and uncertainty, ET studies typically do not provide a way to assess the relative importance of modeling ET with and without LST inputs. To better quantify model benefits and degradations due to LST observational inaccuracies, a Bayesian uncertainty study was undertaken using data collected in remote sensing experiments at Maricopa, Arizona. Visible, near-infrared and thermal infrared data were obtained from an airborne platform. The prior probability distribution of ET estimates was modeled using fractional cover, local weather data and a Penman-Monteith model, while the likelihood of the LST data was modeled with a two-source energy balance model. Thus the posterior probabilities of ET represented the value added by using LST data. Results from an ET study over cotton grown in 2014 and 2015 showed significantly reduced ET confidence intervals when LST data were incorporated.
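
    The prior-times-likelihood scheme the abstract describes can be sketched on a grid (a minimal sketch with invented numbers; the study's actual prior comes from a Penman-Monteith model and its likelihood from a two-source energy balance model, both of which are stand-in Gaussians here):

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# grid of candidate ET values, mm/day (illustrative range)
et = [i * 0.01 for i in range(1201)]

# prior: fractional-cover / weather-driven estimate, broad uncertainty
prior = [gauss(v, 6.0, 1.5) for v in et]

# likelihood: LST-driven energy-balance estimate, tighter
like = [gauss(v, 5.2, 0.6) for v in et]

# posterior by Bayes' rule, normalised on the grid
post = [p * l for p, l in zip(prior, like)]
z = sum(post)
post = [p / z for p in post]

# posterior mean and 95% credible interval: the value added by LST data
mean = sum(v * p for v, p in zip(et, post))
cdf, lo, hi = 0.0, None, None
for v, p in zip(et, post):
    cdf += p
    if lo is None and cdf >= 0.025:
        lo = v
    if hi is None and cdf >= 0.975:
        hi = v
print(round(mean, 2), (lo, hi))
```

    The posterior credible interval is markedly narrower than the prior's, which is the sense in which the LST observations add value in the study.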

  3. Flux depression and the absolute measurement of the thermal neutron flux density

    International Nuclear Information System (INIS)

    Bensch, Friedrich.

    1977-01-01

    The thermal neutron flux depression in a diffusing medium by an absorbing foil has been treated in numerous papers. The results are re-examined in an attempt to find a uniform and physically meaningful representation of the 'activation correction'. This quantity can be split up into a combination of probabilities. Thus, it is possible to determine the activation correction for any moderator and foil material. Measurements confirm the utility of the concepts introduced.

  4. Performance Probability Distributions for Sediment Control Best Management Practices

    Science.gov (United States)

    Ferrell, L.; Beighley, R.; Walsh, K.

    2007-12-01

    Controlling soil erosion and sediment transport can be a significant challenge during the construction process due to the extent and conditions of bare, disturbed soils. Best Management Practices (BMPs) are used as the framework for the design of sediment discharge prevention systems in stormwater pollution prevention plans, which are typically required for construction sites. This research focuses on commonly-used BMP systems for perimeter control of sediment export: silt fences and fiber rolls. Although these systems are widely used, the physical and engineering parameters describing their performance are not well understood. Performance expectations are based on manufacturer results, but due to the dynamic conditions that exist on a construction site, performance expectations are not always achievable in the field. Based on experimental results, product performance is shown to be highly variable. Experiments using the same installation procedures show inconsistent sediment removal performances ranging from >85 percent to zero. The goal of this research is to improve the determination of off-site sediment yield based on probabilistic performance results of perimeter control BMPs. BMPs are evaluated in the Soil Erosion Research Laboratory (SERL) in the Civil and Environmental Engineering department at San Diego State University. SERL experiments are performed on a 3-m by 10-m tilting soil bed with a soil depth of 0.5 meters and a slope of 33 percent. The simulated storm event consists of 17 mm/hr for 20 minutes followed by 51 mm/hr for 30 minutes. The storm event is based on an ASTM design storm intended to simulate BMP failures. BMP performance is assessed based on experiments where BMPs are installed per manufacturer specifications, less than optimal installations, and no-treatment conditions. Preliminary results from 30 experiments are presented and used to develop probability distributions for BMP sediment removal efficiencies. The results are then combined with

  5. A discussion on the origin of quantum probabilities

    International Nuclear Information System (INIS)

    Holik, Federico; Sáenz, Manuel; Plastino, Angel

    2014-01-01

    We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.

  6. Scanning micro-Hall probe mapping of magnetic flux distributions and current densities in YBa{sub 2}Cu{sub 3}O{sub 7}

    Energy Technology Data Exchange (ETDEWEB)

    Xing, W.; Heinrich, B. [Simon Fraser Univ., British Columbia (Canada)]; Zhou, H. [CTF Systems, Inc., British Columbia (Canada)]; and others

    1994-12-31

    Mapping of the magnetic flux density B{sub z} (perpendicular to the film plane) for a YBa{sub 2}Cu{sub 3}O{sub 7} thin-film sample was carried out using a scanning micro-Hall probe. The sheet magnetization and sheet current densities were calculated from the B{sub z} distributions. From the known sheet magnetization, the tangential (B{sub x,y}) and normal components of the flux density B were calculated in the vicinity of the film. It was found that the sheet current density was mostly determined by 2B{sub x,y}/d, where d is the film thickness. The evolution of flux penetration as a function of applied field will be shown.

  7. Flux-ratio anomalies from discs and other baryonic structures in the Illustris simulation

    Science.gov (United States)

    Hsueh, Jen-Wei; Despali, Giulia; Vegetti, Simona; Xu, Dandan; Fassnacht, Christopher D.; Metcalf, R. Benton

    2018-04-01

    The flux ratios in the multiple images of gravitationally lensed quasars can provide evidence for dark matter substructure in the halo of the lensing galaxy if the flux ratios differ from those predicted by a smooth model of the lensing galaxy mass distribution. However, it is also possible that baryonic structures in the lensing galaxy, such as edge-on discs, can produce flux-ratio anomalies. In this work, we present the first statistical analysis of flux-ratio anomalies due to baryons from a numerical simulation perspective. We select galaxies with various morphological types in the Illustris simulation and ray trace through the simulated haloes, which include baryons in the main lensing galaxies but exclude any substructures, in order to explore the pure baryonic effects. Our ray-tracing results show that the baryonic components can be a major contribution to the flux-ratio anomalies in lensed quasars and that edge-on disc lenses induce the strongest anomalies. We find that the baryonic components increase the probability of finding high flux-ratio anomalies in the early-type lenses by about 8 per cent and by about 10-20 per cent in the disc lenses. The baryonic effects also induce astrometric anomalies in 13 per cent of the mock lenses. Our results indicate that the morphology of the lens galaxy becomes important in the analysis of flux-ratio anomalies when considering the effect of baryons, and that the presence of baryons may also partially explain the discrepancy between the observed (high) anomaly frequency and what is expected due to the presence of subhaloes as predicted by the cold dark matter simulations.

  8. Wave functions and two-electron probability distributions of the Hooke's-law atom and helium

    International Nuclear Information System (INIS)

    O'Neill, Darragh P.; Gill, Peter M. W.

    2003-01-01

    The Hooke's-law atom (hookium) provides an exactly soluble model for a two-electron atom in which the nuclear-electron Coulombic attraction has been replaced by a harmonic one. Starting from the known exact position-space wave function for the ground state of hookium, we present the momentum-space wave function. We also look at the intracules, two-electron probability distributions, for hookium in position, momentum, and phase space. These are compared with the Hartree-Fock results and the Coulomb holes (the difference between the exact and Hartree-Fock intracules) in position, momentum, and phase space are examined. We then compare these results with analogous results for the ground state of helium using a simple, explicitly correlated wave function

  9. Measurement of neutron flux distribution by semiconductor detector; merenje raspodele neutronskog fluksa poluprovodnickim detektorom

    Energy Technology Data Exchange (ETDEWEB)

    Obradovic, D; Bosevski, T [Institut za nuklearne nauke Boris Kidric, Vinca, Beograd (Yugoslavia)

    1964-07-01

    Application of semiconductor detectors for measuring the neutron flux distribution is about 10 times faster than measurement by activation foils and demands significantly lower reactor power. The following corrections are avoided: the mass of the activation foils, which influences self-shielding; nuclear decay during activity measurements; and counter dead time. It is possible to check the measured data during the experiment and repeat measurements if needed. The precision of the measurement is higher, since it is possible to choose the desired statistics. The method described in this paper was applied for measurements at the RB reactor. It is concluded that the method is suitable for fast measurements, but activation analysis is still indispensable.

  10. Intermittent electron density and temperature fluctuations and associated fluxes in the Alcator C-Mod scrape-off layer

    Science.gov (United States)

    Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.

    2018-06-01

    The Alcator C-Mod mirror Langmuir probe system has been used to sample data time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one-second-long time series of electron density, temperature, radial electric drift velocity and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that probability distribution functions, power spectral densities as well as auto-correlation functions of the data time series agree well with predictions from the stochastic model. The electron particle and heat fluxes present large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation driven heat flux.
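
    The "superposition of uncorrelated exponential pulses" invoked here is a filtered Poisson (shot-noise) process. A minimal simulation (all parameters invented) shows the construction and reproduces the predicted mean value ⟨Φ⟩ = ⟨A⟩τd/⟨τw⟩, the product of the mean pulse amplitude and the ratio of pulse duration to mean waiting time:

```python
import math
import random

random.seed(1)

def shot_noise(tau_d, mean_amp, mean_wait, dt=0.01, t_end=5000.0):
    """Superpose uncorrelated one-sided exponential pulses with
    exponentially distributed amplitudes and waiting times."""
    n = int(t_end / dt)
    signal = [0.0] * n
    t = 0.0
    while True:
        t += random.expovariate(1.0 / mean_wait)   # Poisson arrivals
        if t >= t_end:
            break
        amp = random.expovariate(1.0 / mean_amp)   # exponential amplitudes
        i0 = int(t / dt)
        for i in range(i0, n):
            decay = math.exp(-(i * dt - t) / tau_d)
            if decay < 1e-4:                       # pulse has died out
                break
            signal[i] += amp * decay
    return signal

sig = shot_noise(tau_d=1.0, mean_amp=1.0, mean_wait=5.0)
mean = sum(sig) / len(sig)
print(mean)  # theory: <A> * tau_d / <tau_w> = 0.2
```

    The same synthetic signal can be used to check the model's predicted amplitude distribution, power spectral density and autocorrelation against measured time series.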

  11. Temporal and spatial variations of soil carbon dioxide, methane, and nitrous oxide fluxes in a Southeast Asian tropical rainforest

    Science.gov (United States)

    Itoh, M.; Kosugi, Y.; Takanashi, S.; Hayashi, Y.; Kanemitsu, S.; Osaka, K.; Tani, M.; Nik, A. R.

    2010-09-01

    To clarify the factors controlling temporal and spatial variations of soil carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) fluxes, we investigated these gas fluxes and environmental factors in a tropical rainforest in Peninsular Malaysia. Temporal variation of CO2 flux in a 2-ha plot was positively related to soil water condition and rainfall history. Spatially, CO2 flux was negatively related to soil water condition. When CO2 flux hotspots were included, no other environmental factors such as soil C or N concentrations showed any significant correlation. Although the larger area sampled in the present study complicates explanations of spatial variation of CO2 flux, our results support a previously reported bipolar relationship between the temporal and spatial patterns of CO2 flux and soil water condition observed at the study site in a smaller study plot. Flux of CH4 was usually negative with little variation, resulting in the soil at our study site functioning as a CH4 sink. Both temporal and spatial variations of CH4 flux were positively related to the soil water condition. Soil N concentration was also related to the spatial distribution of CH4 flux. Some hotspots were observed, probably due to CH4 production by termites, and these hotspots obscured the relationship between both temporal and spatial variations of CH4 flux and environmental factors. Temporal variation of N2O flux and soil N2O concentration was large and significantly related to the soil water condition, or in a strict sense, to rainfall history. Thus, the rainfall pattern controlled wet season N2O production in soil and its soil surface flux. Spatially, large N2O emissions were detected in wet periods at wetter and anaerobic locations, and were thus determined by soil physical properties. Our results showed that, even in Southeast Asian rainforests where distinct dry and wet seasons do not exist, variation in the soil water condition related to rainfall history controlled the

  12. Viscous flux flow velocity and stress distribution in the Kim model of a long rectangular slab superconductor

    Science.gov (United States)

    Yang, Yong; Chai, Xueguang

    2018-05-01

    When a bulk superconductor undergoes the magnetization process, enormous mechanical stresses are imposed on the bulk, which often leads to cracking. In the present work, we aim to resolve the viscous flux flow velocity υ0/w, i.e. υ0 (because w is a constant), and the stress distribution in a long rectangular slab superconductor for a decreasing external magnetic field (Ba) after zero-field cooling (ZFC) and field cooling (FC), using the Kim model and the viscous flux flow equation simultaneously. The viscous flux flow velocity υ0/w and the magnetic field B*, at which the body forces point away in all of the slab volume during the Ba reduction, are determined by both Ba and the decreasing rate (dBa/dt) of the external magnetic field normalized by the full penetration field Bp. In previous studies, υ0/w obtained from the Bean model with viscous flux flow is determined only by dBa/dt, and the field B* derived from the Kim model alone is a positive constant once the maximum external magnetic field is chosen. This means that the findings in this paper have more physical content than the previous results. The stress continues changing with decreasing field Ba after ZFC if B* ≤ 0. The effect of dBa/dt on the stress is significant in the cases of both ZFC and FC.

  13. Subjective Probabilities for State-Dependent Continuous Utility

    NARCIS (Netherlands)

    P.P. Wakker (Peter)

    1987-01-01

    For the expected utility model with state-dependent utilities, Karni, Schmeidler and Vind (1983) have shown how to recover uniquely the involved subjective probabilities if the preferences, contingent on a hypothetical probability distribution over the state space, are known. This they

  14. Extinction probabilities and stationary distributions of mobile genetic elements in prokaryotes: The birth-death-diversification model.

    Science.gov (United States)

    Drakos, Nicole E; Wahl, Lindi M

    2015-12-01

    Theoretical approaches are essential to our understanding of the complex dynamics of mobile genetic elements (MGEs) within genomes. Recently, the birth-death-diversification model was developed to describe the dynamics of mobile promoters (MPs), a particular class of MGEs in prokaryotes. A unique feature of this model is that genetic diversification of elements was included. To explore the implications of diversification on the long-term fate of MGE lineages, in this contribution we analyze the extinction probabilities, extinction times and equilibrium solutions of the birth-death-diversification model. We find that diversification increases both the survival and growth rate of MGE families, but the strength of this effect depends on the rate of horizontal gene transfer (HGT). We also find that the distribution of MGE families per genome is not necessarily monotonically decreasing, as observed for MPs, but may have a peak in the distribution that is related to the HGT rate. For MPs specifically, we find that new families have a high extinction probability, and predict that the number of MPs is increasing, albeit at a very slow rate. Additionally, we develop an extension of the birth-death-diversification model which allows MGEs in different regions of the genome, for example coding and non-coding, to be described by different rates. This extension may offer a potential explanation as to why the majority of MPs are located in non-promoter regions of the genome. Copyright © 2015 Elsevier Inc. All rights reserved.
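
    As background for the extinction probabilities analyzed here, the diversification-free baseline is the classical linear birth-death process: a lineage started from a single element goes extinct with probability min(1, d/b). A minimal sketch (rates invented, and without the paper's diversification or HGT terms) checks the closed form against a gambler's-ruin Monte Carlo:

```python
import random

def extinction_prob_closed(b, d):
    # linear birth-death process started from a single element:
    # extinction probability is min(1, d/b)
    return min(1.0, d / b)

def extinction_prob_mc(b, d, trials=10000, cap=200, seed=7):
    """Monte Carlo check: follow the embedded jump chain, treating a
    population that reaches `cap` as having escaped extinction."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        n = 1
        while 0 < n < cap:
            # the next event is a birth with probability b / (b + d)
            n += 1 if rng.random() < b / (b + d) else -1
        if n == 0:
            extinct += 1
    return extinct / trials

b, d = 1.0, 0.6   # illustrative birth and death rates
print(extinction_prob_closed(b, d), extinction_prob_mc(b, d))
```

    The paper's result that diversification increases survival corresponds to the effective d/b ratio of a family dropping as elements diversify.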

  15. Evaluation of magnetic flux distribution from magnetic domains in [Co/Pd] nanowires by magnetic domain scope method using contact-scanning of tunneling magnetoresistive sensor

    Energy Technology Data Exchange (ETDEWEB)

    Okuda, Mitsunobu, E-mail: okuda.m-ky@nhk.or.jp; Miyamoto, Yasuyoshi; Miyashita, Eiichi; Hayashi, Naoto [NHK Science and Technology Research Laboratories, 1-10-11 Kinuta Setagaya, Tokyo 157-8510 (Japan)

    2014-05-07

    Current-driven magnetic domain wall motions in magnetic nanowires have attracted great interest for physical studies and engineering applications. The magnetic force microscope (MFM) is widely used for indirect verification of domain locations in nanowires, where the relative magnetic force between the local domains and the MFM probe is used for detection. However, the magnetic moment of the MFM probe can occasionally influence and/or rotate the magnetic states in low-moment nanowires. To solve this issue, the “magnetic domain scope for wide area with nano-order resolution (nano-MDS)” method has been proposed recently; it detects the magnetic flux distribution from the specimen directly by scanning a tunneling magnetoresistive field sensor. In this study, the magnetic domain structure in nanowires was investigated by both MFM and nano-MDS, and the leakage magnetic flux density from the nanowires was measured quantitatively by nano-MDS. Specimen nanowires consisting of [Co (0.3)/Pd (1.2)]{sub 21}/Ru(3) films (units in nm) with perpendicular magnetic anisotropy were fabricated onto Si substrates by dual ion beam sputtering and e-beam lithography. The length and width of the fabricated nanowires are 20 μm and 150 nm, respectively. We have succeeded in obtaining not only remanent domain images detecting up and down magnetizations, similar to those obtained by MFM, but also the magnetic flux density distribution from the nanowires directly by nano-MDS. The value of the maximum leakage magnetic flux obtained by nano-MDS is in good agreement with the coercivity obtained by magneto-optical Kerr effect microscopy. By changing the protective diamond-like-carbon film thickness on the tunneling magnetoresistive sensor, the three-dimensional spatial distribution of the leakage magnetic flux could be evaluated.

  16. Optimal design of unit hydrographs using probability distribution and ...

    Indian Academy of Sciences (India)


    optimization formulation is solved using binary-coded genetic algorithms. The number of variables to ... Unit hydrograph; rainfall-runoff; hydrology; genetic algorithms; optimization; probability ..... Application of the model. Data derived from the ...

  17. A prediction method of the effect of radial heat flux distribution on critical heat flux in CANDU fuel bundles

    International Nuclear Information System (INIS)

    Yuan, Lan Qin; Yang, Jun; Harrison, Noel

    2014-01-01

    Fuel irradiation experiments to study fuel behaviors have been performed in the experimental loops of the National Research Universal (NRU) Reactor at Atomic Energy of Canada Limited (AECL) Chalk River Laboratories (CRL) in support of the development of new fuel technologies. Before initiating a fuel irradiation experiment, the experimental proposal must be approved to ensure that the test fuel strings put into the NRU loops meet safety margin requirements in critical heat flux (CHF). The fuel strings in irradiation experiments can have varying degrees of fuel enrichment and burnup, resulting in large variations in radial heat flux distribution (RFD). CHF experiments performed in Freon flow at CRL for full-scale bundle strings with a number of RFDs showed a strong effect of RFD on CHF. A prediction method was derived based on experimental CHF data to account for the RFD effect on CHF. It provides good CHF predictions for various RFDs as compared to the data. However, the range of the tested RFDs in the CHF experiments is not as wide as that required in the fuel irradiation experiments. The applicability of the prediction method needs to be examined for the RFDs beyond the range tested by the CHF experiments. The Canadian subchannel code ASSERT-PV was employed to simulate the CHF behavior for RFDs that would be encountered in fuel irradiation experiments. The CHF predictions using the derived method were compared with the ASSERT simulations. It was observed that the CHF predictions agree well with the ASSERT simulations in terms of CHF, confirming the applicability of the prediction method in fuel irradiation experiments. (author)

  18. Study on the radiation flux and temperature distributions of the concentrator-receiver system in a solar dish/Stirling power facility

    International Nuclear Information System (INIS)

    Li Zhigang; Tang Dawei; Du Jinglong; Li Tie

    2011-01-01

    Uniform heater temperature and high optical-thermal efficiency are crucial for the reliable and economical operation of a solar Dish/Stirling engine facility. The Monte-Carlo ray-tracing method is utilized to predict the radiation flux distributions of the concentrator-receiver system. The ray-tracing method is first validated by experiment; then the radiation flux profiles on the solar receiver surface for a faceted real concentrator and an ideal paraboloidal concentrator, irradiated by Xe-arc lamps and the real sun, are analyzed for different aperture positions and receiver shapes. The resulting radiation flux profiles are subsequently transferred to a CFD code as boundary conditions to numerically simulate the fluid flow and conjugate heat transfer in the receiver cavity by coupling radiation, natural convection and heat conduction together; the CFD method is also validated by experiment. The results indicate that a faceted concentrator in combination with a solar simulator composed of 12 Xe-arc lamps is advantageous for driving the solar Stirling engine in all-weather indoor tests. Based on the simulation results, a solar receiver-Stirling heater configuration is designed to achieve a considerably uniform temperature distribution on the heater head tubes while maintaining a high efficiency of 60.7%. - Highlights: → Radiation flux in Dish/Stirling system is analyzed by validated ray-tracing method. → Temperature field on the solar receiver is analyzed by a validated CFD method. → Effects of Xe-arc lamp solar simulator and faceted real concentrator are analyzed. → Effects of different receiver positions and receiver shapes are investigated. → A Stirling heater configuration is presented with uniform temperature field.
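
    The core of Monte-Carlo ray tracing for a dish concentrator can be sketched in a few lines (a far simpler idealized paraboloid than the faceted concentrator of the paper; focal length, aperture radius and ray count are invented). Each ray samples a point on the dish and a direction within the solar disc, is reflected specularly, and is propagated to the focal plane, where the spread of hit points gives the flux distribution:

```python
import math
import random

random.seed(3)
f = 1.0                    # focal length, m (illustrative)
R = 0.5                    # dish aperture radius, m
sun_half_angle = 4.65e-3   # angular radius of the solar disc, rad

hits = []
for _ in range(20000):
    # sample a reflection point on the dish, uniform over the aperture disc
    r = R * math.sqrt(random.random())
    az = 2 * math.pi * random.random()
    x, y = r * math.cos(az), r * math.sin(az)
    z = (x * x + y * y) / (4 * f)          # paraboloid z = r^2 / (4f)
    # incoming ray: nearly axis-parallel, tilted by a point on the solar disc
    th = sun_half_angle * math.sqrt(random.random())
    ph = 2 * math.pi * random.random()
    d = (math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph), -math.cos(th))
    # unit normal of the surface z - r^2/(4f) = 0
    n = (-x / (2 * f), -y / (2 * f), 1.0)
    nn = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    n = (n[0] / nn, n[1] / nn, n[2] / nn)
    dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2]
    rd = tuple(di - 2 * dot * ni for di, ni in zip(d, n))   # specular reflection
    t = (f - z) / rd[2]                    # propagate to the focal plane z = f
    hits.append((x + t * rd[0], y + t * rd[1]))

# rms focal-spot radius; set by the finite solar angular size (a few mm here)
rms = math.sqrt(sum(hx * hx + hy * hy for hx, hy in hits) / len(hits))
print(rms)
```

    Binning the hit points instead of taking the rms yields the radiation flux profile that the paper passes to the CFD code as a boundary condition.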

  19. A one-dimensional, one-group absorption-production nodal method for neutron flux and power distributions calculations

    International Nuclear Information System (INIS)

    Ferreira, C.R.

    1984-01-01

    The absorption-production nodal method is presented for steady-state and dynamic calculations in one dimension and one energy group. The NOD1D computer code (in FORTRAN-IV) was developed. Calculations of neutron flux and power distributions, burnup, effective multiplication factors and critical boron concentration were made with the NOD1D code and compared with results obtained with the CITATION code, which uses the finite-difference method. The nuclear constants were produced by the LEOPARD code. (M.C.K.)
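
    A minimal sketch (not the nodal method itself, and with invented cross-sections) illustrates the kind of one-group, one-dimensional eigenvalue calculation NOD1D and CITATION perform: finite differences plus power iteration for the effective multiplication factor and flux on a bare slab.

```python
def solve_one_group(n=50, L=100.0, D=1.0, siga=0.07, nusigf=0.075, tol=1e-8):
    """One-group 1-D diffusion, -D phi'' + Sa phi = (1/k) nuSf phi,
    zero-flux boundaries, finite differences + power iteration."""
    h = L / (n + 1)
    phi = [1.0] * n
    k = 1.0
    for _ in range(10000):
        src = [nusigf * p / k for p in phi]          # fission source
        # solve the tridiagonal system by the Thomas algorithm
        lo = -D / h ** 2                             # off-diagonal
        dg = siga + 2 * D / h ** 2                   # diagonal
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = lo / dg, src[0] / dg
        for i in range(1, n):
            m = dg - lo * cp[i - 1]
            cp[i] = lo / m
            dp[i] = (src[i] - lo * dp[i - 1]) / m
        phi_new = [0.0] * n
        phi_new[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            phi_new[i] = dp[i] - cp[i] * phi_new[i + 1]
        # update the eigenvalue from the fission-source ratio
        k_new = k * sum(phi_new) / sum(phi)
        if abs(k_new - k) < tol:
            return k_new, phi_new
        k, phi = k_new, phi_new
    return k, phi

k, phi = solve_one_group()
print(k)  # analytic bare-slab value nuSf/(Sa + D*(pi/L)^2) is about 1.0565
```

    Nodal methods reach comparable accuracy with far coarser meshes by solving within each node analytically, which is the motivation for NOD1D over a plain finite-difference code.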

  20. The Bayesian count rate probability distribution in measurement of ionizing radiation by use of a ratemeter

    Energy Technology Data Exchange (ETDEWEB)

    Weise, K.

    2004-06-01

    Recent metrological developments concerning measurement uncertainty, founded on Bayesian statistics, give rise to a revision of several parts of the DIN 25482 and ISO 11929 standard series. These series stipulate detection limits and decision thresholds for ionizing-radiation measurements. Parts 3 and 4 deal with measurements by use of linear-scale analogue ratemeters. A normal frequency distribution of the momentary ratemeter indication for a fixed count rate value is assumed. The actual distribution, which is first calculated numerically by solving an integral equation, differs considerably from the normal distribution, although the latter approximates it for sufficiently large values of the count rate to be measured. As is shown, this similarly holds true for the Bayesian probability distribution of the count rate given a sufficiently large measured value indicated by the ratemeter. This distribution follows from the first one by means of Bayes' theorem. Its expectation value and variance are needed for the standards to be revised on the basis of Bayesian statistics. Simple expressions are given by the present standards for estimating these parameters and for calculating the detection limit and the decision threshold. As is also shown, the same expressions can be used as sufficient approximations by the revised standards if, roughly, the indicated value exceeds the reciprocal of the ratemeter relaxation time constant. (orig.)
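
    In the normal approximation the abstract refers to, the standard uncertainty of an analogue ratemeter indication at count rate r with relaxation time constant τ is commonly taken as u = sqrt(r/(2τ)). A minimal sketch (invented numbers; a simplified threshold formula, not the full ISO 11929 recipe) shows how a decision threshold follows from it:

```python
import math

def ratemeter_uncertainty(rate, tau):
    """Normal-approximation standard uncertainty of an analogue
    ratemeter indication, u = sqrt(rate / (2 * tau)).  Valid roughly
    when the indicated rate exceeds 1/tau, as the abstract notes."""
    return math.sqrt(rate / (2.0 * tau))

# illustrative numbers: background 5 counts/s, relaxation time 10 s
r0, tau = 5.0, 10.0
u0 = ratemeter_uncertainty(r0, tau)

# decision threshold for the net rate at k = 1.645 (alpha = 5 %);
# the sqrt(2) reflects that gross and background indications both fluctuate
threshold = 1.645 * math.sqrt(2) * u0
print(u0, threshold)
```

    A net indication above the threshold is taken as evidence of activity above background; the revised standards derive the same quantities from the expectation and variance of the Bayesian count-rate distribution.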