WorldWideScience

Sample records for statistical multifragmentation models

  1. The statistical multifragmentation model: Origins and recent advances

    International Nuclear Information System (INIS)

    Donangelo, R.; Souza, S. R.

    2016-01-01

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine the probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
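
    The liquid-drop ingredients mentioned above enter through the free energy assigned to each fragment (A, Z) at the freeze-out temperature T. For orientation, a commonly quoted SMM parametrization (following the standard formulation of Bondorf et al.; the numerical values below are the usual default choices, given here for illustration only) reads:

        F_{A,Z}(T) = F^{bulk} + F^{surf} + F^{Coul} + F^{sym},
        F^{bulk} = \left( -W_0 - \frac{T^2}{\epsilon_0} \right) A,
        F^{surf} = \beta_0 \left( \frac{T_c^2 - T^2}{T_c^2 + T^2} \right)^{5/4} A^{2/3},
        F^{Coul} = \frac{3}{5} \frac{e^2 Z^2}{r_0 A^{1/3}} \left[ 1 - (\rho/\rho_0)^{1/3} \right],
        F^{sym} = \gamma \, \frac{(A - 2Z)^2}{A},

    with W_0 ≈ 16 MeV, ε_0 ≈ 16 MeV, β_0 ≈ 18 MeV, T_c ≈ 18 MeV and γ ≈ 25 MeV; the bracket in F^{Coul} is the Wigner-Seitz correction at the freeze-out density ρ. The improvements discussed in the abstract amount to replacing such schematic terms by empirical masses and refined energy densities for the light fragments.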

  2. The statistical multifragmentation model: Origins and recent advances

    Energy Technology Data Exchange (ETDEWEB)

    Donangelo, R., E-mail: donangel@fing.edu.uy [Instituto de Física, Facultad de Ingeniería, Universidad de la República, Julio Herrera y Reissig 565, 11300, Montevideo (Uruguay); Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro - RJ (Brazil); Souza, S. R., E-mail: srsouza@if.ufrj.br [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro - RJ (Brazil); Instituto de Física, Universidade Federal do Rio Grande do Sul, C.P. 15051, 91501-970 Porto Alegre - RS (Brazil)

    2016-07-07

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine the probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  3. The statistical decay of very hot nuclei: from sequential decay to multifragmentation

    International Nuclear Information System (INIS)

    Carlson, B.V.; Donangelo, R.; Universidad de la Republica, Montevideo; Souza, S.R.; Universidade Federal do Rio Grande do Sul; Lynch, W.G.; Steiner, A.W.; Tsang, M.B.

    2010-01-01

    At low excitation energies, the compound nucleus typically decays through the sequential emission of light particles. As the energy increases, the emission probability of heavier fragments increases until, at sufficiently high energies, several heavy complex fragments are emitted during the decay. The extent to which this fragment emission is simultaneous or sequential has been a subject of theoretical and experimental study for almost 30 years. The Statistical Multifragmentation Model, an equilibrium model of simultaneous fragment emission, uses the configurations of a statistical ensemble to determine the distribution of primary fragments of a compound nucleus. The primary fragments are then assumed to decay by sequential compound emission or Fermi breakup. As the first step toward a more unified model of these processes, we demonstrate the equivalence of a generalized Fermi breakup model, in which densities of excited states are taken into account, to the microcanonical version of the statistical multifragmentation model. We then establish a link between this unified Fermi breakup / statistical multifragmentation model and the well-known process of compound nucleus emission, which permits simultaneous and sequential emission to be treated on the same footing. Within this unified framework, we analyze the increasing importance of simultaneous, multifragment decay with increasing excitation energy and decreasing lifetime of the compound nucleus. (author)

  4. Quantum statistical model of nuclear multifragmentation in the canonical ensemble method

    International Nuclear Information System (INIS)

    Toneev, V.D.; Ploszajczak, M.; Parvant, A.S.

    1999-01-01

    A quantum statistical model of nuclear multifragmentation is proposed. The recurrence equation method used in the canonical ensemble makes the model solvable and transparent to its physical assumptions, and allows one to obtain results without invoking the Monte Carlo technique. The model exhibits a first-order phase transition. Quantum statistics effects are clearly seen at the microscopic level of occupation numbers but are almost washed out for the global thermodynamic variables and the averaged observables studied. In the latter case, recurrence relations for the multiplicity distributions of both intermediate-mass and all fragments are derived, and the specific changes in the shape of the multiplicity distributions in the narrow region of the transition temperature are stressed. The temperature domain favorable for searching for the HBT effect is noted. (authors)
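
    In its simplest one-component classical form, the recurrence-equation technique referred to above is the standard recursion for the canonical partition function of a fragmenting system of A nucleons. The sketch below is illustrative only: the function names and the schematic single-fragment weights omega_k are ours, not taken from the paper, and a realistic weight would also carry translational and internal free-energy factors.

        import numpy as np

        def canonical_partition(A, omega):
            """Canonical partition functions Q[0..A] from single-fragment weights
            omega[k], via Q_a = (1/a) * sum_{k=1..a} k * omega[k] * Q[a-k], Q_0 = 1."""
            Q = np.zeros(A + 1)
            Q[0] = 1.0
            for a in range(1, A + 1):
                Q[a] = sum(k * omega[k] * Q[a - k] for k in range(1, a + 1)) / a
            return Q

        def mean_multiplicities(A, omega):
            """Average multiplicity of size-k fragments: <n_k> = omega_k * Q_{A-k} / Q_A."""
            Q = canonical_partition(A, omega)
            return {k: omega[k] * Q[A - k] / Q[A] for k in range(1, A + 1)}

        # Schematic Boltzmann weights with liquid-drop bulk and surface terms
        # (placeholder parameter values, in MeV, chosen only for illustration).
        T, W0, beta0, A = 5.0, 16.0, 18.0, 100
        omega = {k: np.exp((W0 * k - beta0 * k ** (2.0 / 3.0)) / T) for k in range(1, A + 1)}
        n = mean_multiplicities(A, omega)
        print(n[1], n[A])   # free nucleons vs. the unfragmented system

    No Monte Carlo sampling is involved: the recursion delivers the exact canonical averages, which is the transparency advantage the abstract alludes to.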

  5. Quantum statistical model of nuclear multifragmentation in the canonical ensemble method

    Energy Technology Data Exchange (ETDEWEB)

    Toneev, V.D.; Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Parvant, A.S. [Institute of Applied Physics, Moldova Academy of Sciences, MD (Moldova); Parvant, A.S. [Joint Institute for Nuclear Research, Bogoliubov Lab. of Theoretical Physics, Dubna (Russian Federation)

    1999-07-01

    A quantum statistical model of nuclear multifragmentation is proposed. The recurrence equation method used in the canonical ensemble makes the model solvable and transparent to its physical assumptions, and allows one to obtain results without invoking the Monte Carlo technique. The model exhibits a first-order phase transition. Quantum statistics effects are clearly seen at the microscopic level of occupation numbers but are almost washed out for the global thermodynamic variables and the averaged observables studied. In the latter case, recurrence relations for the multiplicity distributions of both intermediate-mass and all fragments are derived, and the specific changes in the shape of the multiplicity distributions in the narrow region of the transition temperature are stressed. The temperature domain favorable for searching for the HBT effect is noted. (authors)

  6. Analysis of multi-fragmentation reactions induced by relativistic heavy ions using the statistical multi-fragmentation model

    Energy Technology Data Exchange (ETDEWEB)

    Ogawa, T., E-mail: ogawa.tatsuhiko@jaea.go.jp [Research Group for Radiation Protection, Division of Environment and Radiation Sciences, Nuclear Science and Engineering Directorate, Japan Atomic Energy Agency, Shirakata-Shirane, Tokai, Ibaraki 319-1195 (Japan); Sato, T.; Hashimoto, S. [Research Group for Radiation Protection, Division of Environment and Radiation Sciences, Nuclear Science and Engineering Directorate, Japan Atomic Energy Agency, Shirakata-Shirane, Tokai, Ibaraki 319-1195 (Japan); Niita, K. [Research Organization for Information Science and Technology, Shirakata-shirane, Tokai, Ibaraki 319-1188 (Japan)

    2013-09-21

    The fragmentation cross-sections of relativistic-energy nucleus–nucleus collisions were analyzed using the statistical multi-fragmentation model (SMM) incorporated into the Monte Carlo radiation transport simulation code PHITS (Particle and Heavy Ion Transport code System). Comparison with literature data showed that PHITS-SMM reproduces the fragmentation cross-sections of heavy nuclei at relativistic energies better than the original PHITS, by up to two orders of magnitude. It was also found that SMM does not degrade the neutron production cross-sections in heavy-ion collisions or the fragmentation cross-sections of light nuclei, for which SMM has not been benchmarked. Therefore, SMM is a robust model that can supplement conventional nucleus–nucleus reaction models, enabling more accurate prediction of fragmentation cross-sections.

  7. Analysis of multi-fragmentation reactions induced by relativistic heavy ions using the statistical multi-fragmentation model

    International Nuclear Information System (INIS)

    Ogawa, T.; Sato, T.; Hashimoto, S.; Niita, K.

    2013-01-01

    The fragmentation cross-sections of relativistic-energy nucleus–nucleus collisions were analyzed using the statistical multi-fragmentation model (SMM) incorporated into the Monte Carlo radiation transport simulation code PHITS (Particle and Heavy Ion Transport code System). Comparison with literature data showed that PHITS-SMM reproduces the fragmentation cross-sections of heavy nuclei at relativistic energies better than the original PHITS, by up to two orders of magnitude. It was also found that SMM does not degrade the neutron production cross-sections in heavy-ion collisions or the fragmentation cross-sections of light nuclei, for which SMM has not been benchmarked. Therefore, SMM is a robust model that can supplement conventional nucleus–nucleus reaction models, enabling more accurate prediction of fragmentation cross-sections.

  8. Multifragmentation: New dynamics or old statistics?

    International Nuclear Information System (INIS)

    Moretto, L.G.; Delis, D.N.; Wozniak, G.J.

    1993-10-01

    The understanding of the fission process as it has developed over the last fifty years has been applied to multifragmentation. Two salient aspects have been discovered: 1) a strong decoupling of the entrance and exit channels, with the formation of well-characterized sources; 2) a statistical competition between two-, three-, four-, five-, ..., n-body decays.

  9. Experimental signature for statistical multifragmentation

    International Nuclear Information System (INIS)

    Moretto, L.G.; Delis, D.N.; Wozniak, G.J.

    1993-01-01

    Multifragment production was measured for the 60 MeV/nucleon 197Au + 27Al, 51V, and natCu reactions. The branching ratios for binary, ternary, quaternary, and quinary decays were determined as a function of the excitation energy E and are independent of the target. The logarithms of these branching ratios, when plotted vs E^(-1/2), show a linear dependence that strongly suggests a statistical competition between the various multifragmentation channels. This behavior seems to relegate the role of dynamics to the formation of the sources, which then proceed to decay in an apparently statistical manner.

  10. Evaluation of observables in statistical multifragmentation theories

    International Nuclear Information System (INIS)

    Cole, A.J.

    1989-01-01

    The canonical formulation of equilibrium statistical multifragmentation is examined. It is shown that the explicit construction of observables (average values) by sampling the partition probabilities is unnecessary, insofar as closed expressions in the form of recursion relations can be obtained quite easily. Such expressions may conversely be used to verify the sampling algorithms.
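
    In the same one-component notation used above, the closed expressions referred to are the standard canonical identities (quoted from the general literature as a sketch, not copied from this paper):

        \langle n_k \rangle = \omega_k \, \frac{Q_{A-k}}{Q_A}, \qquad
        \langle n_k (n_k - 1) \rangle = \omega_k^2 \, \frac{Q_{A-2k}}{Q_A} \quad (A \ge 2k),
        \qquad Q_A = \frac{1}{A} \sum_{k=1}^{A} k \, \omega_k \, Q_{A-k}, \quad Q_0 = 1,

    so any observable built from fragment multiplicities follows directly from the partition functions, and a Monte Carlo sampler can be checked against these closed forms.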

  11. Multifragmentation model for astrophysical strangelets

    International Nuclear Information System (INIS)

    Biswas, Sayan; De, J.N.; Joarder, Partha S.; Raha, Sibaji; Syam, Debapriyo

    2012-01-01

    A model for the possible size distribution of astrophysical strangelets, which fragment out of the warm strange quark matter ejected during the merger of binary strange stars in the Galaxy, is presented here by invoking the statistical multifragmentation model. A simplified assumption of zero quark mass has been made to obtain such a mass spectrum for the strangelets. An approximate estimate of the intensity of such strangelets in the galactic cosmic rays is also attempted using a diffusion approximation.

  12. Multifragmentation: Surface instabilities or statistical decay

    International Nuclear Information System (INIS)

    Moretto, L.G.; Tso, K.; Delis, D.; Colonna, N.; Wozniak, G.J.

    1992-11-01

    Boltzmann-Nordheim-Vlasov calculations show multifragmentation that seems to originate from surface instabilities. These instabilities are traced to a sheet instability caused by the proximity interaction. Experimental data, on the other hand, suggest that multifragmentation may be dominated by phase space.

  13. Multifragmentation: surface instabilities or statistical decay?

    International Nuclear Information System (INIS)

    Moretto, L.G.; Tso, K.; Delis, D.; Colonna, N.; Wozniak, G.J.

    1993-01-01

    Boltzmann-Nordheim-Vlasov calculations show multifragmentation that seems to originate from surface instabilities. These instabilities are traced to a sheet instability caused by the proximity interaction. Experimental data, on the other hand, suggest that multifragmentation may be dominated by phase space. (author)

  14. Nuclear multifragmentation within the framework of different statistical ensembles

    International Nuclear Information System (INIS)

    Aguiar, C.E.; Donangelo, R.; Souza, S.R.

    2006-01-01

    The sensitivity of the statistical multifragmentation model to the underlying statistical assumptions is investigated. We concentrate on its microcanonical, canonical, and isobaric formulations. As far as average values are concerned, our results reveal that all the ensembles make very similar predictions, as long as the relevant macroscopic variables (such as temperature, excitation energy, and breakup volume) are the same in all statistical ensembles. It also turns out that the multiplicity dependence of the breakup volume in the microcanonical version of the model mimics a system at (approximately) constant pressure, at least in the plateau region of the caloric curve. However, in contrast to the average values, our results suggest that the distributions of physical observables are quite sensitive to the statistical assumptions. This finding may help in deciding which hypothesis corresponds to the best picture of the freeze-out stage.

  15. Statistical analysis of experimental multifragmentation events in 64Zn+112Sn at 40 MeV/nucleon

    Science.gov (United States)

    Lin, W.; Zheng, H.; Ren, P.; Liu, X.; Huang, M.; Wada, R.; Chen, Z.; Wang, J.; Xiao, G. Q.; Qu, G.

    2018-04-01

    A statistical multifragmentation model (SMM) is applied to the experimentally observed multifragmentation events in an intermediate-energy heavy-ion reaction. Using the temperature and symmetry energy extracted with the isobaric yield ratio (IYR) method based on the modified Fisher model (MFM), SMM is applied to the reaction 64Zn+112Sn at 40 MeV/nucleon. The experimental isotope and mass distributions of the primary reconstructed fragments are compared without an afterburner, and they are well reproduced. The temperature T and symmetry energy coefficient a_sym extracted from the SMM-simulated events, using the IYR method, are also consistent with those from the experiment. These results strongly suggest that in the multifragmentation process there is a freeze-out volume, in which thermal and chemical equilibrium is established before or at the time of intermediate-mass fragment emission.
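
    For orientation, the modified Fisher model mentioned above writes fragment yields in the schematic form below (a standard MFM parametrization from the general literature; the exact definitions used by the authors may differ in detail):

        Y(N,Z) \propto A^{-\tau} \exp\!\left[ \frac{W(N,Z) + N\mu_n + Z\mu_p}{T} \right],

    where W(N,Z) is the fragment free energy. In ratios of isobaric yields the isospin-independent parts of W cancel, leaving the chemical-potential difference (μ_n - μ_p)/T together with the symmetry-energy term a_sym (N-Z)^2 / (A T), which is how T and a_sym are extracted in the IYR method.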

  16. Sensitivity study of experimental measures for the nuclear liquid-gas phase transition in the statistical multifragmentation model

    Science.gov (United States)

    Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.

    2018-05-01

    The experimental measures examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM) are the total multiplicity derivative, the moment parameters, the bimodal parameter, the fluctuation of the maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ). The sensitivities of these measures are studied. All of these measures predict a critical signature at or near the critical point, both for the primary and the secondary fragments. Among them, the total multiplicity derivative and the NVZ provide accurate measures of the critical point from the final cold fragments as well as from the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.

  17. Conditions for equivalence of statistical ensembles in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Mallik, Swagata; Chaudhuri, Gargi

    2012-01-01

    Statistical models based on canonical and grand canonical ensembles are extensively used to study intermediate-energy heavy-ion collisions. The underlying physical assumptions behind the canonical and grand canonical models are fundamentally different, and in principle the two agree only in the thermodynamic limit, when the number of particles becomes infinite. Nevertheless, we show that these models are equivalent in the sense that they predict similar results if certain conditions are met, even for finite nuclei. In particular, the results converge when nuclear multifragmentation leads to the formation of predominantly nucleons and low-mass clusters. The conditions under which the equivalence holds are amenable to present-day experiments.

  18. Branching ratios in sequential statistical multifragmentation

    International Nuclear Information System (INIS)

    Moretto, L.G.; Phair, L.; Tso, K.; Jing, K.; Wozniak, G.J.

    1995-01-01

    The energy dependence of the probability of producing n fragments follows a characteristic statistical law. Experimental intermediate-mass-fragment multiplicity distributions are shown to be binomial at all excitation energies. From these distributions a single binary-event probability can be extracted that has the thermal dependence p = exp[-B/T]. Thus, it is inferred that multifragmentation is a sequence of thermal binary events. The increase of p with excitation energy implies a corresponding contraction of the time scale and explains recently observed fragment-fragment and fragment-spectator Coulomb correlations. (authors). 22 refs., 5 figs
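
    In practice, the binomial analysis described here amounts to inverting the first two moments of the measured IMF multiplicity distribution; a minimal sketch (variable names and input numbers are illustrative, not from the paper):

        from math import comb

        def binomial_parameters(mean, var):
            """Invert <n> = m*p and var = m*p*(1-p) to get the number of tries m
            and the elementary binary-decay probability p."""
            p = 1.0 - var / mean
            return mean / p, p

        def binomial_pn(n, m, p):
            """P_n = C(m, n) p^n (1-p)^(m-n), with m rounded to an integer."""
            m_int = round(m)
            return comb(m_int, n) * p ** n * (1.0 - p) ** (m_int - n)

        mean, var = 2.4, 1.8   # illustrative moments of an IMF multiplicity distribution
        m, p = binomial_parameters(mean, var)
        for n in range(6):
            print(n, round(binomial_pn(n, m, p), 4))
        # If p = exp(-B/T) and T scales as sqrt(E), then ln p is linear in E^(-1/2),
        # which is the thermal scaling test applied in the paper.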

  19. Branching ratios in sequential statistical multifragmentation

    International Nuclear Information System (INIS)

    Moretto, L.G.; Phair, L.; Tso, K.; Jing, K.; Wozniak, G.J.

    1995-01-01

    The energy dependence of the probability of producing n fragments follows a characteristic statistical law. Experimental intermediate-mass-fragment multiplicity distributions are shown to be binomial at all excitation energies. From these distributions a single binary-event probability can be extracted that has the thermal dependence p = exp[-B/T]. Thus, it is inferred that multifragmentation is a sequence of thermal binary events. The increase of p with excitation energy implies a corresponding contraction of the time scale and explains recently observed fragment-fragment and fragment-spectator Coulomb correlations. (author). 22 refs., 5 figs

  20. Isoscaling parameter in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Mallik, S.; Chaudhuri, G.

    2012-01-01

    The multifragmentation stage is studied with the Canonical Thermodynamical Model, which is based on equilibrium statistical mechanics and involves the calculation of partition functions. The decay of the excited fragments produced after the multifragmentation stage is calculated with an evaporation model based on Weisskopf's formalism. To study the temperature and symmetry energy dependence of the isoscaling parameter α, the dissociating systems are taken as A_1 = 168, Z_1 = 75 and A_2 = 186, Z_2 = 75. These represent 112Sn + 112Sn and 124Sn + 124Sn central collisions after pre-equilibrium particles are emitted.
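
    The isoscaling parameter α of the title is defined through the ratio of yields of the same fragment (N, Z) from two sources differing in isospin, R21(N,Z) = Y2(N,Z)/Y1(N,Z) ≈ C exp(αN + βZ). A minimal sketch of its extraction by a least-squares fit (synthetic yields; all numbers are illustrative):

        import numpy as np

        def fit_isoscaling(yields1, yields2):
            """Least-squares fit of ln[Y2/Y1] = lnC + alpha*N + beta*Z over
            all fragments (N, Z) present in both yield dictionaries."""
            rows, lhs = [], []
            for (N, Z), y1 in yields1.items():
                y2 = yields2.get((N, Z))
                if y2 is not None and y1 > 0 and y2 > 0:
                    rows.append([1.0, N, Z])
                    lhs.append(np.log(y2 / y1))
            sol, *_ = np.linalg.lstsq(np.array(rows), np.array(lhs), rcond=None)
            return sol   # lnC, alpha, beta

        # Synthetic yields obeying exact isoscaling with alpha = 0.5, beta = -0.4.
        frags = [(N, Z) for N in range(1, 8) for Z in range(1, 8)]
        Y1 = {f: np.exp(-0.3 * (f[0] + f[1])) for f in frags}
        Y2 = {f: Y1[f] * np.exp(0.5 * f[0] - 0.4 * f[1]) for f in frags}
        print(fit_isoscaling(Y1, Y2))   # recovers approximately (0.0, 0.5, -0.4)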

  1. WIX: statistical nuclear multifragmentation with collective expansion and Coulomb forces

    Science.gov (United States)

    Randrup, Jørgen

    1993-10-01

    By suitable augmentation of the event generator FREESCO, a code WIX has been constructed with which it is possible to simulate the statistical multifragmentation of a specified nuclear source, which may be both hollow and deformed, in the presence of a collective expansion and with the interfragment Coulomb forces included.

  2. A phenomenological model for nuclear multifragmentation

    International Nuclear Information System (INIS)

    Souza, S.R.; Leray, S.; Paula, L. de; Nemeth, J.; Ngo, C.; CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette; Ngo, H.

    1992-01-01

    A phenomenological model for nuclear multifragmentation is presented. It is made up of two complementary parts: molecular dynamics and restructured aggregation. It is applied to study the multifragmentation of the 16O + 80Br system at several bombarding energies. The results turn out to be in good agreement with the available emulsion data. The production of charged particles and IMFs as a function of the bombarding energy is also studied. The results seem to agree quite well with experimental observations and with previous results of other model calculations. (author) 19 refs.; 5 figs.; 1 tab

  3. Multifragmentation induced by light relativistic projectiles and heavy ions: similarities and differences

    International Nuclear Information System (INIS)

    Karnaukhov, V.A.; Avdeev, S.P.; Kuznetsov, V.D.

    1998-01-01

    The experimental data on fragment multiplicities, their energy and charge distributions, and the emission times are considered for the nuclear multifragmentation process induced by relativistic light projectiles (protons, helium) and heavy ions. With light projectiles, multifragmentation is a pure 'thermal' process, well described by the statistical models. Heavy-ion-induced multifragmentation is influenced by dynamical effects related first of all to the compression of the system in the collision. However, statistical models can also be applied to describe the partition of the system if the excitation energy is less than 10 MeV/nucleon and the compression is modest. For central collisions of heavy ions the statistical approach fails to describe the data.

  4. Shear viscosity to entropy density ratio in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Pal, Subrata

    2010-01-01

    Nuclear multifragmentation in intermediate-energy heavy-ion collisions has long been associated with the liquid-gas phase transition. We calculate the shear viscosity to entropy density ratio η/s for an equilibrated system of nucleons and fragments produced in multifragmentation within an extended statistical multifragmentation model. The temperature dependence of η/s exhibits behavior surprisingly similar to that of H2O. In the coexistence phase of fragments and light particles, the ratio η/s reaches a minimum of depth comparable to that for water in the vicinity of the critical temperature for the liquid-gas phase transition. The effects of the freeze-out volume and the surface symmetry energy on η/s in multifragmentation are studied.

  5. 'Thermal' multifragmentation induced in gold target by relativistic protons

    International Nuclear Information System (INIS)

    Karnaukhov, V.A.; Avdeev, S.P.; Kuznetsov, V.D.

    1996-01-01

    Multifragmentation in p+Au collisions at 2.16, 3.6 and 8.1 GeV has been studied with the FASA set-up. The mean IMF multiplicities (2.0, 2.6 and 3.0) are comparable with those obtained with heavy ions. The modified Glauber approximation, followed by the statistical multifragmentation model, is used to describe the data on the fragment multiplicities and energy spectra. 25 refs., 4 figs., 1 tab

  6. Multifragmentation in relativistic heavy ion reactions

    International Nuclear Information System (INIS)

    Trautmann, W.

    1996-11-01

    Multifragmentation is the dominant decay mode of heavy nuclear systems with excitation energies in the vicinity of their binding energies. It explores the partition space associated with the number of nucleonic constituents and is characterized by the multiple production of nuclear fragments of intermediate mass. Reactions at relativistic bombarding energies, exceeding several hundred MeV per nucleon, have been found to be very efficient in creating such highly excited systems. Peripheral collisions of heavy symmetric systems, or more central collisions of mass-asymmetric systems, produce spectator nuclei with properties indicating a high degree of equilibration. The observed decay patterns are well described by statistical multifragmentation models. The present experimental and theoretical studies are particularly motivated by the fact that multifragmentation is considered a possible manifestation of the liquid-gas phase transition in finite nuclear systems. From the simultaneous measurement of the temperature and of the energy content of excited spectator systems, a caloric curve of nuclei has been obtained. The characteristic S-shaped behavior resembles that of ordinary liquids. Signatures of critical phenomena in finite nuclear systems are searched for in multifragmentation data. These studies, supported by the success of percolation in reproducing the experimental mass or charge correlations, concentrate on the fluctuations observed in these observables. Attempts have been made to deduce critical-point exponents associated with multifragmentation. (orig.)
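
    Since percolation is invoked here as the reference model that reproduces the measured mass and charge correlations, a minimal bond-percolation fragmenter on a cubic lattice (an illustrative union-find sketch, not the specific codes used in the cited studies) looks like this:

        import random
        from collections import Counter

        def find(parent, x):
            """Union-find root lookup with path halving."""
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def percolation_fragments(L, p_bond, rng=random):
            """Fragment sizes from bond percolation on an L^3 cubic lattice:
            each nearest-neighbor bond is active with probability p_bond."""
            n = L ** 3
            parent = list(range(n))
            idx = lambda x, y, z: (x * L + y) * L + z
            for x in range(L):
                for y in range(L):
                    for z in range(L):
                        a = idx(x, y, z)
                        nbrs = []
                        if x + 1 < L: nbrs.append(idx(x + 1, y, z))
                        if y + 1 < L: nbrs.append(idx(x, y + 1, z))
                        if z + 1 < L: nbrs.append(idx(x, y, z + 1))
                        for b in nbrs:
                            if rng.random() < p_bond:
                                ra, rb = find(parent, a), find(parent, b)
                                if ra != rb:
                                    parent[ra] = rb
            return Counter(find(parent, i) for i in range(n)).values()

        # Fragment mass histogram near the bond-percolation threshold of the
        # infinite cubic lattice (p_c ~ 0.249); a finite "nucleus" smears the transition.
        sizes = Counter(s for _ in range(200) for s in percolation_fragments(5, 0.25))
        print(sorted(sizes.items())[:10])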

  7. 'Thermal' multifragmentation in p + Au collisions at relativistic energies

    International Nuclear Information System (INIS)

    Avdeev, S.P.; Karnaukhov, V.A.; Kuznetsov, V.D.

    1997-01-01

    Multiple emission of intermediate-mass fragments has been studied for p + Au collisions at 2.16, 3.6 and 8.1 GeV with the FASA set-up. The mean IMF multiplicities are equal to 1.7, 1.9 and 2.1 (±0.2), respectively. The multiplicity, charge distributions and kinetic energy spectra of the IMFs are described in the framework of the empirically modified intranuclear cascade model followed by the statistical multifragmentation model. The results support a scenario of true thermal multifragmentation of a hot and expanded target spectator.

  8. Statistical view on nuclear multifragmentation: Primary decays

    International Nuclear Information System (INIS)

    Raduta, A.H.; Raduta, A.R.

    1997-01-01

    An overall view of the universe of primary decays appearing in the process of nuclear multifragmentation is given via a microcanonical Monte Carlo simulation of the Metropolis type. General characteristics such as mass and charge distributions, relative probabilities of evaporation, fission, fragmentation and vaporization, the average number of fragments, and the distributions of the number of intermediate-mass fragments offer valuable information about the inner workings of the process. The capability of the model to describe, in a unified way, very different breakup regimes is pointed out. Predictions for charge distributions, isotopic yields, and fission mass distributions are compared with experimental data. copyright 1997 The American Physical Society

  9. Statistical multifragmentation: Is the distinction between simultaneous and sequential decay inessential?

    International Nuclear Information System (INIS)

    Moretto, L.G.; Delis, D.N.; Wozniak, G.J.

    1994-01-01

    Recently, some experimental work has succeeded in isolating and characterizing what appear to be true multifragmentation sources formed in reverse-kinematics reactions. These sources are formed in a process akin to incomplete fusion, whereby one partner of the collision picks up, and fuses with, a variable portion of the other partner. From the kinematics of the event, it is possible to determine how much mass has been picked up and the excitation energy associated with the fused object. Surprisingly, these sources, once characterized as described above, undergo multifragment decay in a way that is singularly independent of the formation process. The observed branching ratios for binary, ternary, quaternary, and quinary decays seem to depend almost exclusively upon the excitation energy E of the fused object, and remarkably little upon the target-projectile combination or even the bombarding energy. A powerful method is devised to verify the statistical competition between two-, three-, four-, and n-body decays. It is shown that under rather general conditions, the simultaneous n-body multifragmentation probability can be reduced to the (n-1)-step sequential probability.

  10. Non-extensive statistical aspects of clustering and nuclear multi-fragmentation

    International Nuclear Information System (INIS)

    Calboreanu, A.

    2002-01-01

    Recent developments concerning the application of non-extensive Tsallis statistics to describe clustering phenomena are briefly presented. Cluster formation is a common feature of a large number of physical phenomena encountered in molecular and nuclear physics, astrophysics, condensed matter and biophysics. Common to all of these is the large number of degrees of freedom, thus justifying a statistical approach. However, the conventional statistical mechanics paradigm seems to fail in dealing with clustering. Whether this is due to the prevalence of complex dynamical constraints, or is a manifestation of new statistics, is a subject of considerable interest, which was intensively debated during the last few years. The Tsallis conjecture has proved extremely appealing due to its rather elegant and transparent basic arguments. We present here evidence for its adequacy for the study of a large class of physical phenomena related to cluster formation. An application to nuclear multi-fragmentation is presented. (author)

  11. Aspects of statistical model for multifragmentation

    International Nuclear Information System (INIS)

    Bhattacharyya, P.; Das Gupta, S.; Mekjian, A. Z.

    1999-01-01

    We deal with two different aspects of an exactly soluble statistical model of fragmentation. First we show, using a zero-range force and finite-temperature Thomas-Fermi theory, that a common link can be found between finite-temperature mean-field theory and the statistical fragmentation model. We show that the latter naturally arises in the spinodal region. Next we show that although the exact statistical model is a canonical model and uses temperature, microcanonical results, which use constant energy rather than constant temperature, can also be obtained from the canonical model using a saddle-point approximation. The methodology is extremely simple to implement and, at least in all the examples studied in this work, is very accurate. (c) 1999 The American Physical Society
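
    The saddle-point step referred to is the textbook Laplace inversion connecting the canonical partition function Q(β) to the microcanonical level density ρ(E) (sketched here in one-variable form; this is the standard relation, not a formula quoted from the paper):

        \rho(E) = \frac{1}{2\pi i} \int_{\beta^* - i\infty}^{\beta^* + i\infty} Q(\beta)\, e^{\beta E}\, d\beta
        \;\approx\; \frac{Q(\beta^*)\, e^{\beta^* E}}{\sqrt{2\pi \left. \partial^2 \ln Q / \partial \beta^2 \right|_{\beta^*}}},
        \qquad E = -\left. \frac{\partial \ln Q}{\partial \beta} \right|_{\beta^*},

    so microcanonical averages at fixed energy E are approximated by canonical averages evaluated at the saddle-point temperature T* = 1/β*.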

  12. A dynamical model for multifragmentation

    International Nuclear Information System (INIS)

    Ngo, H.; Ighezou, F.Z.; Ngo, C.

    1999-01-01

    The surface multifragmentation of highly excited (compressed and thermally excited) 208Pb is investigated with a finite-temperature spherical TDHF approximation coupled to a restructured aggregation model. This approach is discussed in terms of the data available from the ALADIN collaboration at GSI on gold-ion-induced reactions on C, Al and Cu targets at 600 MeV/u. The calculation showed that the slowest fragments originate in the nuclear volume while the smaller, faster fragments are emitted from the surface.

  13. Boltzmann-Langevin equation, dynamical instability and multifragmentation

    International Nuclear Information System (INIS)

    Feng-Shou Zhang

    1993-02-01

    By using simulations of the Boltzmann-Langevin equation, which incorporates dynamical fluctuations beyond the usual transport theories, and by coupling it to a coalescence model, we obtain information on multifragmentation in heavy-ion collisions. From a calculation of the 40Ca + 40Ca system, we recover some trends of recent multifragmentation data.

  14. Characterizing multifragmentation

    International Nuclear Information System (INIS)

    Campi, X.; Krivine, H.

    1994-01-01

    Various methods to characterize the fragment size distributions in nuclear multifragmentation are discussed. The goal is to find the best signals of a phase transition associated with multifragmentation. The concepts of scaling and critical exponents are reviewed and the possibility of determining them in finite nuclei is examined. The fluctuations of the fragment size distribution and a possible signal of intermittency are also discussed. (author). 29 refs., 4 figs., 1 tab
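
    The scaling and fluctuation analyses referred to here are usually built on event-by-event moments of the fragment size distribution. A minimal sketch, assuming the Campi-style convention of excluding the largest fragment from the moments:

        def moments(fragment_sizes, kmax=2, exclude_largest=True):
            """Moments m_k = sum_s s^k n_s of one event's fragment sizes; the
            largest fragment (the 'liquid' part) is conventionally excluded."""
            sizes = sorted(fragment_sizes)
            if exclude_largest and sizes:
                sizes = sizes[:-1]
            return [sum(s ** k for s in sizes) for k in range(kmax + 1)]

        def gamma2(fragment_sizes):
            """Reduced variance gamma_2 = m2*m0/m1^2; enhanced values signal
            large fragment-size fluctuations, as expected near a critical point."""
            m0, m1, m2 = moments(fragment_sizes)
            return m2 * m0 / m1 ** 2

        print(gamma2([90, 4, 3, 2, 1]))          # evaporation-like event
        print(gamma2([40, 25, 16, 9, 6, 3, 1]))  # broad, critical-like partition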

  15. Critical behavior in a microcanonical multifragmentation model

    International Nuclear Information System (INIS)

    Raduta, A.H.; Raduta, A.R.; Chomaz, Ph.; Raduta, A.H.; Raduta, A.R.; Gulminelli, F.

    2001-01-01

    Scaling properties of the fragment size distributions are studied in a microcanonical multifragmentation model. A new method based on the global quality of the scaling function is presented. Scaling is not washed out by the long-range Coulomb interaction nor by secondary decays, for a wide range of source masses, densities and deposited energies. However, the influence of these factors on the precise values of the critical exponents, as well as the finite-size corrections to scaling, are shown to be important and to affect the possible determination of a specific universality class. (authors)

  16. Mechanical breakdown in the nuclear multifragmentation phenomena. Thermodynamic analysis

    International Nuclear Information System (INIS)

    Bulavin, L.A.; Cherevko, K.V.; Sysoev, V.M.

    2012-01-01

    Based on the similarity of the Van der Waals and nucleon-nucleon interactions, the known thermodynamic relations for ordinary liquids are used to analyze the possible decay channels in the proton-induced nuclear multifragmentation phenomenon. The main features of the different phase trajectories in the P-V plane are compared with the experimental data on multifragmentation. This allows the choice of the phase trajectories giving the correct qualitative picture of the phenomenon. Based on the thermodynamic analysis of the proton-induced multifragmentation phenomenon, the most appropriate decay channel, corresponding to the realistic phase trajectory, is chosen. A macroscopic analysis of the suggested decay channel is performed in order to check the possibility of a mechanical breakdown of the heated system. Based on a simple thermodynamic model, preliminary quantitative calculations of the corresponding macroscopic parameters (energy, pressure) are performed, and the model is thus verified at the macroscopic level. It is shown that, at the macroscopic level, the chosen decay channel through mechanical breakdown meets the necessary conditions for describing the proton-induced multifragmentation phenomenon.

  17. Multifragmentation in peripheral nucleus-nucleus collisions

    International Nuclear Information System (INIS)

    Trautmann, W.; Adloff, J.C.; Bouissou, P.; Hubele, J.; Imme, G.; Iori, I.; Kreutz, P.; Leray, S.; Lindenstruth, V.; Liu, Z.; Lynen, U.; Meijer, R.J.; Milkau, U.; Moroni, A.; Mueller, W.F.J.; Ngo, C.; Ogilvie, C.A.; Pochodzalla, J.; Raciti, G.; Rudolf, G.; Schuettauf, A.; Stuttge, L.

    1993-10-01

    The complete fragmentation of highly excited nuclear systems into fragments of intermediate mass is observed in heavy-ion reactions at relativistic bombarding energies in the range of several hundreds of MeV per nucleon. Similar features are found for peripheral collisions between heavy nuclei and for more central collisions between a heavy and a light nucleus. The partition space explored in multifragment decays is well described by the statistical multifragmentation models. The expansion before breakup is confirmed by the analysis of the measured fragment energies of ternary events in their own rest frame. Collective radial flow is confined to rather small values in these peripheral-type reactions. Many conceptually different models seem to be capable of reproducing the charge correlations measured for the multifragment decays. (orig.)

  18. Surface multifragmentation investigated with a finite temperature spherical TDHF model

    International Nuclear Information System (INIS)

    Ngo, H.; Ighezou, F.Z.; Paula, L. De

    1992-01-01

    A model for multifragmentation caused by heavy-ion collisions is developed. The initial state is a hot and compressed spherical nucleus in thermal equilibrium. The dynamical evolution of this nucleus is studied. The nuclear density of the system is calculated in the mean-field approximation. It is shown that, in some cases, the surface of the nucleus breaks up before its volume. (K.A.) 8 refs.; 1 fig

  19. Study of the experimental data of multifragmentation of gold and krypton nuclei on interactions with photoemulsion nuclei at high energies

    International Nuclear Information System (INIS)

    Saleh, Z.A.; Abdel-Hafez, A.

    2002-01-01

    Results from the EMU-01/12 collaboration on the multifragmentation of gold residual nuclei created in interactions with photoemulsion nuclei at an energy of 10.7 GeV/nucleon are presented, together with experimental data on the multifragmentation of krypton created in interactions with photoemulsion nuclei at an energy of 0.9 GeV/nucleon. The data are analyzed in the frame of the statistical model of multifragmentation. There appear to be two regimes of nuclear multifragmentation: the former when less than one-half of the nucleons of the projectile nucleus are knocked out, the latter when more than one-half of the nucleons are knocked out. Residual nuclei with masses close to each other created in different reactions fragment in practically the same way when more than one-half of the nucleons of the original nuclei are knocked out. These results give an indication that projectiles other than gold and krypton may show the same characteristics in interactions with emulsion nuclei at high energies.

  20. Dynamic aspects of the nuclear decay: from the fission to the multifragmentation

    International Nuclear Information System (INIS)

    Gruyer, Diego

    2014-01-01

    In this work we study the evolution and nature of the reaction and decay mechanisms of hot nuclei produced in heavy-ion collisions from E = 8 to 25 MeV/A, measured with INDRA. In central Xe+Sn collisions from E = 8 to 25 MeV/A, three-fragment events present a significant cross section without the underlying production mechanism being clearly established. We have shown that the fragments arise from two successive binary splittings. The time interval between these two splittings decreases with increasing incident energy, becoming compatible with a simultaneous three-body break-up above E = 20 MeV/A, which was interpreted as the signature of the onset of multifragmentation. We then investigated the nature of the multifragmentation process. A statistical analysis of the distribution of the largest fragment charge (Zmax) produced in central Xe+Sn collisions at E = 25-50 MeV/A allowed us to establish that multifragmentation is a dynamical aggregation process. It also demonstrates the effects of collective radial expansion on multifragmentation partitions through the link between the timescale of the process and the shape of the Zmax distribution. The comparison of the fragmentation patterns of comparable-size systems produced in symmetric (Xe+Sn) and asymmetric (Ta+Zn) central collisions, which are supposed to follow different trajectories in the nuclear phase diagram, confirms the link between collective radial expansion and fragment partitions in multifragmentation. (author) [fr

  1. Multifragmentation of hot nuclei

    International Nuclear Information System (INIS)

    Tamain, B.

    1990-10-01

    It is difficult to deposit a large amount (∼1 GeV) of excitation energy into a nucleus. The best way to deposit such large excitation energies consists of striking a given target nucleus with several nucleons, which can be achieved by using intermediate-energy (10-100 MeV/nucleon) heavy ions. Such very excited objects have been named hot nuclei. The study of hot nuclei has been undertaken only in the last 7 years, because intermediate-energy heavy-ion facilities were not available before. The game is then to determine the decay properties of such nuclei and their limits of existence. Their study is connected with general properties of nuclear matter, namely its equation of state. Of special interest is the onset of a new decay mechanism: multifragmentation, the non-sequential disassembly of a hot nucleus into several light nuclei (often called intermediate-mass fragments, or IMFs) or particles. This paper shows how this mechanism can reflect fundamental properties of nuclear matter, but also how difficult its experimental signature is to establish. Multifragmentation has also been studied by using very energetic projectiles (protons and heavy ions) in the relativistic or ultra-relativistic region. The question of multifragmentation of hot nuclei is far from being solved. One knows that IMF production increases when the excitation energy brought into a system is strongly increased, but very little is known about the mechanisms involved, and a clear onset for multifragmentation is not established.

  2. Thermal multifragmentation of hot nuclei and liquid-fog phase transition

    International Nuclear Information System (INIS)

    Karnaukhov, V.A.; Avdeev, S.P.; Duginova, E.V.

    2002-01-01

    Multiple emission of intermediate-mass fragments in the collisions of protons (up to 8.1 GeV), 4He (4 and 14.6 GeV) and 12C (22.4 GeV) on Au has been studied with the 4π-setup FASA. In all cases thermal multifragmentation of the hot and diluted target spectator takes place. The fragment multiplicity and charge distributions are well described by a combined model consisting of the modified intranuclear cascade followed by the statistical multibody decay of the hot system. The IMF-IMF correlation study supports this picture, giving a very short time scale for the process (τ ≤ 70 fm/c). This decay process can be interpreted as a first-order nuclear liquid-fog phase transition inside the spinodal region. The evolution of the mechanism of thermal multifragmentation with increasing projectile mass was investigated. The onset of radial collective flow was observed for the heavier projectiles. The analysis reveals information on the fragment space distribution inside the break-up volume: heavier IMFs are formed predominantly in the interior of the fragmenting nucleus, possibly due to the density gradient.

  3. Thermal Multifragmentation of Hot Nuclei and Liquid-Fog Phase Transition

    CERN Document Server

    Karnaukhov, V A; Duginova, E V; Petrov, L A; Rodionov, V K; Oeschler, H; Budzanowski, A; Karcz, W; Janicki, M; Bochkarev, O V; Kuzmin, E A; Chulkov, L V; Norbeck, E; Botvina, A S

    2002-01-01

    Multiple emission of intermediate-mass fragments in the collisions of protons (up to 8.1 GeV), 4He (4 and 14.6 GeV) and 12C (22.4 GeV) on Au has been studied with the 4π-setup FASA. In all cases thermal multifragmentation of the hot and diluted target spectator takes place. The fragment multiplicity and charge distributions are well described by a combined model consisting of the modified intranuclear cascade followed by the statistical multibody decay of the hot system. The IMF-IMF correlation study supports this picture, giving a very short time scale for the process (τ ≤ 70 fm/c). This decay process can be interpreted as a first-order nuclear liquid-fog phase transition inside the spinodal region. The evolution of the mechanism of thermal multifragmentation with increasing projectile mass was investigated. The onset of radial collective flow was observed for the heavier projectiles. The analysis reveals information on the fragment space distribution inside the break-up volume: heavier IMFs are formed predominantly in the interior of the fragmenting nucleus, possibly due to the density gradient.

  4. Iso-scaling in a microcanonical multifragmentation model

    International Nuclear Information System (INIS)

    Raduta, R.; Raduta, H.

    2003-01-01

    A microcanonical multifragmentation model is used to investigate iso-scaling over a broad range of excitation energies, for several values of the freeze-out volume, and for equilibrated sources with masses between 40 and 200, in both the primary and asymptotic stages of the decay. It was found that the values of the slope parameters α and β depend on the size and excitation energy of the source and are affected by the secondary decay of the primary fragments. It was shown that iso-scaling is affected by finite-size effects. The evolution with temperature and freeze-out volume of the differences between the neutron and proton chemical potentials corresponding to two equilibrated nuclear sources having the same size and different isospin values is presented. (authors)

  5. Study of multifragmentation: contribution of reduced velocity correlations between particles and fragments

    Energy Technology Data Exchange (ETDEWEB)

    Le Fevre, A. [Paris-7 Univ., 75 (France)]

    1997-05-14

    This work is focused on the study of the fragment and light-particle production mechanisms in the multifragmentation of hot nuclei formed in central Xe+Sn collisions at 50 MeV/u. The experiment was performed with the INDRA multidetector. The central collision events, selected via the flow-angle variable, exhibit the presence of a heavy (Z=90) and highly excited (E* = 12.5 MeV/u) isotropic emission source. The comparison of the data with a statistical multifragmentation model (MMMC) and a dynamical model (BNV) leads us to conclude that the multifragmentation can only be explained in the frame of a relatively cold process, at around 6 MeV/u of thermal excitation energy, preceded by a primary emission stage of the expanding source during which nearly one third of the excitation energy is dissipated. In addition, it appears that the fragment energy spectra are not explained by a purely thermal process, and that one has to invoke a collective expansion motion, with an energy of 2 MeV/u, following the compression of the compound system. In order to clarify the existence of a two-step particle emission (primary and secondary), we have developed and applied an original method of reduced-velocity correlations between particles and fragments. It has allowed us to identify two distinct origins of particle production: one corresponding to secondary emission from the fragments, and the other associated with emission occurring prior to fragment production. Finally, it has also allowed us to bring out a hierarchy of emission times in the decay process with respect to the particle type. (author) 90 refs.
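
    The reduced relative velocity used in such correlation analyses is commonly defined as v_red = v_rel / sqrt(Z1 + Z2), which removes most of the trivial charge dependence of the mutual Coulomb repulsion; the correlation function is then the ratio of same-event to event-mixed pair spectra. A minimal sketch with a synthetic event structure (all names and numbers are illustrative, not from this work):

        import numpy as np

        def v_reduced(v1, v2, z1, z2):
            """Reduced relative velocity |v1 - v2| / sqrt(z1 + z2)."""
            return np.linalg.norm(np.asarray(v1) - np.asarray(v2)) / np.sqrt(z1 + z2)

        def correlation_function(events, bins, rng):
            """1 + R(v_red): same-event pairs divided by event-mixed background.
            Each event is a list of (velocity_vector, charge) tuples."""
            same = [v_reduced(ev[i][0], ev[j][0], ev[i][1], ev[j][1])
                    for ev in events
                    for i in range(len(ev)) for j in range(i + 1, len(ev))]
            mixed = []
            for _ in range(10 * len(same)):   # uncorrelated background from mixed events
                e1, e2 = rng.choice(len(events), 2, replace=False)
                p1 = events[e1][rng.integers(len(events[e1]))]
                p2 = events[e2][rng.integers(len(events[e2]))]
                mixed.append(v_reduced(p1[0], p2[0], p1[1], p2[1]))
            h_same, _ = np.histogram(same, bins=bins)
            h_mixed, _ = np.histogram(mixed, bins=bins)
            ratio = np.divide(h_same, h_mixed, out=np.zeros(len(h_same)), where=h_mixed > 0)
            return ratio * h_mixed.sum() / h_same.sum()

        rng = np.random.default_rng(0)
        # Toy events: two fragments plus one light particle with Gaussian velocities.
        events = [[(rng.normal(0.0, 1.5, 3), z) for z in (8, 6, 2)] for _ in range(300)]
        print(correlation_function(events, bins=np.linspace(0.0, 2.0, 21), rng=rng))

    In real data the same-event spectrum shows a Coulomb hole at small v_red whose width and depth carry the emission-time information exploited in this kind of analysis.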

  6. Source size and time dependence of multifragmentation induced by GeV 3He beams

    International Nuclear Information System (INIS)

    Wang, G.; Kwiatkowski, K.; Bracken, D.S.; Renshaw Foxford, E.; Hsi, W.; Morley, K.B.; Viola, V.E.; Yoder, N.R.; Volant, C.; Legrain, R.; Pollacco, E.C.; Korteling, R.G.; Botvina, A.; Brzychczyk, J.; Breuer, H.

    1999-01-01

    To investigate the source size and time dependence of multifragmentation reactions, small- and large-angle relative-velocity correlations between coincident complex fragments have been measured for the 1.8-4.8 GeV 3He + natAg, 197Au systems. The results support an evolutionary scenario for the fragment emission process in which lighter IMFs (Z ≲ 6) are emitted from a hot, more dense source prior to the breakup of an expanded residue. For the most highly excited residues, for which there is a significant yield of fragments with very soft energy spectra (E/A ≤ 3 MeV), comparisons with an N-body simulation suggest a breakup time of τ ∼ 50 fm/c for the expanded residue. Comparison of these data with both the evolutionary expanding emitting source model and the Copenhagen statistical multifragmentation model shows good agreement for the heavier IMFs formed in the final breakup stage, but only the evolutionary model is successful in accounting for the lighter IMFs. copyright 1999 The American Physical Society

  7. Statistical multifragmentation of non-spherical expanding sources in central heavy-ion collisions

    Energy Technology Data Exchange (ETDEWEB)

    Le Fevre, A. E-mail: a.lefevre@gsi.de; Ploszajczak, M.; Toneev, V.D.; Auger, G.; Begemann-Blaich, M.L.; Bellaize, N.; Bittiger, R.; Bocage, F.; Borderie, B.; Bougault, R.; Bouriquet, B.; Charvet, J.L.; Chbihi, A.; Dayras, R.; Durand, D.; Frankland, J.D.; Galichet, E.; Gourio, D.; Guinet, D.; Hudan, S.; Hurst, B.; Lautesse, P.; Lavaud, F.; Legrain, R.; Lopez, O.; Lukasik, J.; Lynen, U.; Mueller, W.F.J.; Nalpas, L.; Orth, H.; Plagnol, E.; Rosato, E.; Saija, A.; Schwarz, C.; Sfienti, C.; Tamain, B.; Trautmann, W.; Trzcinski, A.; Turzo, K.; Vient, E.; Vigilante, M.; Volant, C.; Zwieglinski, B.; Botvina, A.S

    2004-04-19

    We study the anisotropy effects measured with INDRA at GSI in central collisions of 129Xe + natSn at 50 A MeV and 197Au + 197Au at 60, 80, and 100 A MeV incident energy. The microcanonical multifragmentation model with non-spherical sources is used to simulate an incomplete shape relaxation of the multifragmenting system. This model is employed to interpret the observed anisotropic distributions of the fragment size and mean kinetic energy. The data can be well reproduced if an expanding prolate source aligned along the beam direction is assumed. Either a non-Hubblean or a non-isotropic radial expansion is required to describe the fragment kinetic energies and their anisotropy. The qualitative similarity of the results for the studied reactions suggests that the concept of a longitudinally elongated freeze-out configuration is generally applicable to central collisions of heavy systems. The deformation decreases slightly with increasing beam energy.

  8. Dynamical origin of nuclear multifragmentation

    Energy Technology Data Exchange (ETDEWEB)

    Tirel, O. [Caen Univ., 14 (France)]

    1998-12-01

    The study of peripheral and semi-peripheral collisions in the Xe+Sn reaction at 50 A.MeV has led to the identification of the role of out-of-equilibrium aspects in the production of intermediate-mass fragments (IMF). First, it is shown that the experimental observations are incompatible with a model in which a very hot layer of matter is primarily responsible for the production of IMFs at intermediate velocity. Next, the same data are compared with a calculation using the quantum molecular dynamics (QMD) approach. The quality of the agreement with the predictions of this model allows conclusions to be drawn concerning the production mechanism of the fragments. The IMFs originate from a region that is intermediate between the projectile and the target. It is furthermore shown that this region is not in thermal equilibrium, that the fragments are pre-formed, and that their velocity and composition strongly depend on the initial conditions of the reaction. The quasi-projectile and the quasi-target, on the other hand, are only mildly influenced by the collision, and their excitation energies are estimated to be below the limit at which multifragmentation takes place. In parallel, an analysis is carried out which correlates the multiplicity of the IMFs with the violence of the collision. This shows that a proper analysis of a process as complex as nuclear multifragmentation must simultaneously involve kinetic variables (velocity, energy, ...) as well as static ones (multiplicity, charge distribution, ...). (author)

  9. Dynamical effects in multifragmentation at intermediate energies

    Energy Technology Data Exchange (ETDEWEB)

    Colin, J.; Cussol, D.; Normand, J. [Caen Univ., Lab. de Physique Corpusculaire, IN2P3-CNRS/ENSICAEN, 14 (France)] [and others]

    2003-04-01

    The fragmentation of the quasi-projectile is studied with the INDRA multidetector for different colliding systems and incident energies in the Fermi-energy range. Various experimental observations show that a large part of the fragmentation is not compatible with the statistical fragmentation of a fully equilibrated nucleus. The study of internal correlations is a powerful tool, especially for evidencing entrance-channel effects. These effects have to be included in the theoretical descriptions of nuclear multifragmentation. (authors)

  10. Looking for bimodal distributions in multi-fragmentation reactions

    International Nuclear Information System (INIS)

    Gulminelli, F.

    2007-01-01

    The presence of a phase transition in a finite system, together with its order, can be deduced from the form of the distribution of the order parameter. This issue has been extensively studied in multifragmentation experiments, with results that do not appear fully consistent. In this paper we discuss the effect of the statistical ensemble or sorting conditions on the form of the fragment distributions, and propose a new method, which can be easily implemented experimentally, to discriminate between different fragmentation scenarios. This method, based on a re-weighting of the measured distribution to account for the experimental constraints linked to the energy deposit, is tested on different simple models and appears to provide a powerful discrimination. (author)
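
    A minimal sketch of such a re-weighting, under our illustrative assumption that each event carries a measured energy deposit E and an order parameter Zmax, and that one wants the Zmax distribution corresponding to a flat (canonical-like) energy sorting instead of the apparatus-biased one:

        import numpy as np

        def reweighted_distribution(E, Zmax, e_bins, z_bins):
            """Weight each event by 1/N(E-bin) so that every energy bin contributes
            equally, then histogram the order parameter Zmax with those weights."""
            E, Zmax = np.asarray(E), np.asarray(Zmax)
            which = np.clip(np.digitize(E, e_bins) - 1, 0, len(e_bins) - 2)
            counts = np.bincount(which, minlength=len(e_bins) - 1).astype(float)
            hist, _ = np.histogram(Zmax, bins=z_bins, weights=1.0 / counts[which])
            return hist / hist.sum()

        # Toy data: trigger bias favors low energy deposits; Zmax falls with E.
        rng = np.random.default_rng(1)
        E = rng.exponential(3.0, 5000)
        Zmax = np.clip(50.0 - 4.0 * E + rng.normal(0.0, 3.0, E.size), 1.0, 50.0)
        print(reweighted_distribution(E, Zmax, np.linspace(0, 12, 13), np.arange(0, 55, 5)))

    A bimodal reweighted Zmax distribution would then point to a first-order transition, which is the kind of discrimination the paper aims at.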

  11. Multifragmentation and evaporation: two competing processes in intermediate energy nuclear collisions

    International Nuclear Information System (INIS)

    Campi, X.; Desbois, J.; Lipparini, E.

    1984-05-01

    We study the conditions under which multiple break-up of nuclei occurs during a collision. A minimal temperature of about 5 MeV seems to be necessary to produce multifragmentation. The average number of fragments produced is correlated with the average number of primary nucleon-nucleon collisions. Based on these ideas, a simple model of evaporation-multifragmentation reactions is developed, which accounts for most of the existing data for proton- and heavy-ion-induced reactions.

  12. Fluctuations in projectile fragment distributions from 1 GeV/nucleon Au + C multifragmentation

    International Nuclear Information System (INIS)

    Elliott, J.B.; Gilkes, M.L.; Hauger, A.; Hirsch, A.S.

    1993-01-01

    Fluctuations in cluster distributions play an important role in distinguishing critical and non-critical cluster-forming phenomena. The magnitude of the reduced variance (γ2) of a cluster distribution is a direct measure of the size of its fluctuations. Preliminary examinations of γ2 are made for cluster distributions from 1 GeV/nucleon Au+C data obtained in the EOS experiment at the Bevalac. Values of γ2 are compared to those from percolation and statistical multifragmentation models.

  13. Study of multifragmentation: contribution of reduced velocity correlations between particles and fragments

    International Nuclear Information System (INIS)

    Le Fevre, A.

    1997-01-01

    This work is focused on the study of the fragment and light-particle production mechanisms in the multifragmentation of hot nuclei formed in central Xe+Sn collisions at 50 MeV/u. The experiment was performed with the INDRA multidetector. The central collision events, selected via the flow-angle variable, exhibit the presence of a heavy (Z=90) and highly excited (E* = 12.5 MeV/u) isotropic emission source. The comparison of the data with a statistical multifragmentation model (MMMC) and a dynamical model (BNV) leads us to conclude that the multifragmentation can only be explained in the frame of a relatively cold process, at around 6 MeV/u of thermal excitation energy, preceded by a primary emission stage of the expanding source during which nearly one third of the excitation energy is dissipated. In addition, it appears that the fragment energy spectra are not explained by a purely thermal process, and that one has to invoke a collective expansion motion, with an energy of 2 MeV/u, following the compression of the compound system. In order to clarify the existence of a two-step particle emission (primary and secondary), we have developed and applied an original method of reduced-velocity correlations between particles and fragments. It has allowed us to identify two distinct origins of particle production: one corresponding to secondary emission from the fragments, and the other associated with emission occurring prior to fragment production. Finally, it has also allowed us to bring out a hierarchy of emission times in the decay process with respect to the particle type. (author)

  14. Break-up stage restoration in multifragmentation reactions

    Energy Technology Data Exchange (ETDEWEB)

    Raduta, Ad.R. [Institut de Physique Nucleaire, IN2P3-CNRS, F-91406 Orsay cedex (France); NIPNE, Bucharest-Magurele, POB-MG 6 (Romania)]; Bonnet, E.; Borderie, B.; Le Neindre, N.; Rivet, M.F. [Institut de Physique Nucleaire, IN2P3-CNRS, F-91406 Orsay cedex (France)]; Piantelli, S. [Dip. di Fisica e Sezione INFN, Universita di Firenze, I-50019 Sesto Fiorentino, Fi (Italy)]

    2007-02-15

    For the Xe+Sn multifragmentation reaction at 32 MeV/nucleon, break-up fragments are reconstructed from the experimentally detected ones using evaluations of light-particle evaporation multiplicities, which thus fix the fragment internal excitation. Freeze-out characteristics are extracted from experimental kinetic-energy spectra under the assumption of full decoupling between fragment formation and the energy dissipated in different degrees of freedom. The thermal kinetic energy is determined uniquely, while for the freeze-out volume and collective energy multiple solutions are obtained. Coherence between the solutions of the break-up restoration algorithm and the predictions of a multifragmentation model with an identical definition of primary fragments is regarded as a way to select the true value. The broad kinetic-energy spectrum of {sup 3}He is consistent with a break-up genesis of this isotope. (authors)

  15. Break-up stage restoration in multifragmentation reactions

    International Nuclear Information System (INIS)

    Raduta, Ad.R.; Bonnet, E.; Borderie, B.; Le Neindre, N.; Rivet, M.F.; Piantelli, S.

    2007-02-01

    For the Xe+Sn multifragmentation reaction at 32 MeV/nucleon, break-up fragments are reconstructed from the experimentally detected ones using evaluations of light-particle evaporation multiplicities, which thus fix the fragment internal excitation. Freeze-out characteristics are extracted from experimental kinetic-energy spectra under the assumption of full decoupling between fragment formation and the energy dissipated in different degrees of freedom. The thermal kinetic energy is determined uniquely, while for the freeze-out volume and collective energy multiple solutions are obtained. Coherence between the solutions of the break-up restoration algorithm and the predictions of a multifragmentation model with an identical definition of primary fragments is regarded as a way to select the true value. The broad kinetic-energy spectrum of 3He is consistent with a break-up genesis of this isotope. (authors)

  16. Aggregation process, application to nuclear multifragmentation

    International Nuclear Information System (INIS)

    Garcia, Jean-Baptiste

    1995-01-01

    An aggregation model applied to nuclear multifragmentation, which I have developed and validated, is described. This model contains an aggregation procedure allowing one to determine the aggregation state of a given system. It takes into account spatial and kinetic nucleonic information, as well as in-medium effects. It is made of several energetic linkage criteria, all based on a single quantity: the energy of a system computed in its center-of-mass frame. This procedure has been applied to nuclear physics, treating the nucleus as a mixture of two Fermi gases, interacting via the Yukawa potential (plus the Coulomb potential between protons) and obeying a classical exclusion principle. The general trends of the model match those of nuclear physics. Moreover, two comparisons between the model and nuclear multifragmentation experiments (ALADIN, then FOPI) show good agreement. The FOPI comparison shows that fragments must be formed at the beginning of the expansion phase (Au + Au at 150 MeV/nuc, for central collisions). This work ends with a study of the main ingredients included in the model. It reveals that in-medium effects, the exclusion principle, as well as the shape of the potential have a non-negligible influence on the studied nuclear aggregation process. (author) [fr

  17. Intermittency in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Ploszajczak, M.; Tucholski, A.

    1990-07-01

    Fluctuations of the fragment-size distribution in a percolation model and in nuclear multifragmentation following the breakup of high-energy nuclei in nuclear emulsion are studied using the method of scaled factorial moments. An intermittent pattern of fluctuations is found in the data as well as in the percolation lattice calculation. This is a consequence of both a self-similarity in the fragment-size distribution and a random character of the scaling law. These fluctuations are in general well described by the percolation model. The multifractal dimensions are calculated and their relevance to the study of possible critical behaviour is pointed out. (orig.)
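
    A sketch of horizontally averaged scaled factorial moments Fq, assuming the analysed variable (e.g. fragment size or rapidity) is divided into M bins of equal width; intermittency would show up as Fq rising like a power of M as the bins shrink. Names and the binning scheme are illustrative.

      import numpy as np

      def factorial_moment(events, q, m_bins, lo, hi):
          """Horizontally averaged scaled factorial moment F_q over m_bins bins."""
          edges = np.linspace(lo, hi, m_bins + 1)
          fact_sum, n_sum = 0.0, 0.0
          for ev in events:
              n, _ = np.histogram(ev, bins=edges)
              f = n.astype(float)
              for k in range(1, q):          # falling factorial n(n-1)...(n-q+1)
                  f *= n - k
              fact_sum += f.sum()
              n_sum += n.sum()
          n_bar = n_sum / (len(events) * m_bins)     # mean count per bin
          return fact_sum / (len(events) * m_bins) / n_bar**q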

  18. Light-ion-induced multifragmentation. A fast, evolutionary process

    International Nuclear Information System (INIS)

    Viola, V.E.; Bracken, D.S.; Foxford, E.R.; Ginger, D.; Kwiatkowski, K.; Morley, K.B.; Hsi, W.C.; Wang, G.; Korteling, R.G.; Legrain, R.

    1996-09-01

    GeV light-ion-induced reactions offer a unique tool for preparing hot, dilute nuclear matter. The time evolution of nuclear multifragmentation in 3He + natAg and 3He + 197Au reactions is investigated. Fragment-fragment correlations are studied in order to gain information on the multifragmentation mechanism. (K.A.)

  19. Statistical multifragmentation of non-spherical expanding sources in central heavy-ion collisions

    International Nuclear Information System (INIS)

    Le Fevre, A.; Ploszajczak, M.; Toneev, V.D.

    2003-10-01

    We study the anisotropy effects measured with INDRA at GSI in central collisions of 129Xe + natSn at 50 A MeV and 197Au + 197Au at 60, 80 and 100 A MeV incident energy. The microcanonical multifragmentation model with non-spherical sources is used to simulate an incomplete shape relaxation of the multifragmenting system. This model is employed to interpret the observed anisotropic distributions of fragment size and mean kinetic energy. The data can be well reproduced if an expanding prolate source aligned along the beam direction is assumed. In the model, the anisotropy is the result of correlations between the charge of a fragment and its location in the freeze-out configuration, created by the mutual Coulomb interactions inside the non-spherical source. Either a non-Hubblean or a non-isotropic radial expansion is required to describe the fragment kinetic energies and their anisotropy. The qualitative similarity of the results for the studied reactions suggests that the concept of a longitudinally elongated freeze-out configuration is generally applicable to central collisions of heavy systems. The deformation decreases slightly with increasing beam energy. (orig.)

  20. Multifragmentation in central Kr+Au collisions at 60 MeV/u

    International Nuclear Information System (INIS)

    Lopez, O.; Aboufirassi, M.; Bougault, R.; Brou, R.; Colin, J.; Durand, D.; Genoux-Lubain, A.; Laville, J.L.; Lecolley, J.F.; Lefebvres, F.; Le Brun, C.; Louvel, M.; Mahi, M.; Paulot, C.; Steckmeyer, J.C.; Tamain, B.; Horn, D.

    1994-01-01

    We studied the kinetics of multifragmentation reactions, trying to determine whether the disassembly process is simultaneous or sequential. Central Kr+Au collisions at 60 MeV/u were performed. After a careful selection of central events, by requiring identical parallel and transverse center-of-mass velocity distributions, the events were compared with two computer simulations: standard statistical prescriptions with sequential emission of fragments, and the Lopez-Randrup formalism assuming a simultaneous emission of fragments. The good agreement with the simultaneous model leads us to conclude that a prompt, simultaneous fragment emission best describes the disassembly process. (D.L.). 14 refs., 2 figs

  1. Searching for multifragmentation in light asymmetric systems 93Nb+24Mg and 93Nb+27Al at 30 A.MeV

    International Nuclear Information System (INIS)

    Manduci, Loredana

    2004-01-01

    The present work analyses the inverse-kinematics reactions 93Nb + 27Al and 93Nb + 24Mg at 30 A.MeV. The reaction events are sorted as a function of the violence of the collision and experimental sources are reconstructed. Their decay is studied with two statistical model simulations: Gemini, for binary sequential decay, and SMM, for prompt multifragmentation. Both models show reasonable agreement with the experimental observables when started from a backtracked simulated source whose charge and excitation-energy distributions are comparable to the experimental ones. This result was expected because we are still below the multifragmentation onset. However, the two models disagree on the fragment production rates. This raises the question of the different starting points used in the calculation of the decay probability widths, and suggests that taking into account the nuclear dynamics between the saddle point and the scission point would improve the results. (author) [fr

  2. Dynamics of the multifragmentation

    International Nuclear Information System (INIS)

    Lindenstruth, V.

    1993-05-01

    In this thesis the dynamics of the multifragmentation of gold projectiles is studied in inverse kinematics at an incident energy of 600 A MeV. The essential question, namely how far the system is equilibrated at breakup, can now be answered by the analysis of the angular distributions of the fragments in the source frame. No evidence for non-equilibrium was found. It could be shown that the sum of the charges of the projectile fragments, Zbound, from which protons, as the most probable evaporation products, are excluded, is especially suited to classify these nuclear reactions. The study of the c.m. velocities of the fragments showed that the different breakup channels for a given Zbound represent competing processes. An especially interesting aspect of the multifragmentation is the breakup configuration and its time scale. In order to address this, Coulomb-trajectory calculations for fragments from triple breakups were performed. Two scenarios were considered: a sequential breakup, in which two temporally separated breakups occur, and a simultaneous breakup, in which the three fragments are randomly placed in a given volume. In both calculations the experimental charge and mass distributions were applied in order to exclude uncertainties in the event generation. For both model variants parameter sets could be found which describe the experimental results astonishingly well. (orig./HSI) [de
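
    Such Coulomb-trajectory calculations can be sketched with a velocity-Verlet integration of the mutual Coulomb repulsion among the fragments (units: MeV, fm, fm/c, with e² ≈ 1.44 MeV·fm). The starting configuration, charges and masses are inputs that would encode the sequential or simultaneous scenario; this is a generic sketch, not the thesis's actual code.

      import numpy as np

      E2 = 1.44          # e^2 in MeV*fm
      AMU = 931.5        # atomic mass unit in MeV/c^2

      def coulomb_acc(pos, Z, A):
          """Acceleration (c^2/fm) of each fragment from mutual Coulomb repulsion."""
          acc = np.zeros_like(pos)
          for i in range(len(Z)):
              for j in range(len(Z)):
                  if i != j:
                      r = pos[i] - pos[j]
                      d = np.linalg.norm(r)
                      acc[i] += E2 * Z[i] * Z[j] * r / d**3 / (A[i] * AMU)
          return acc

      def propagate(pos, vel, Z, A, t_max=2000.0, dt=0.5):
          """Velocity-Verlet integration; pos in fm, vel in units of c, t in fm/c."""
          acc = coulomb_acc(pos, Z, A)
          for _ in range(int(t_max / dt)):
              pos = pos + vel * dt + 0.5 * acc * dt**2
              new_acc = coulomb_acc(pos, Z, A)
              vel = vel + 0.5 * (acc + new_acc) * dt
              acc = new_acc
          return pos, vel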

  3. Studying multifragmentation dynamics at intermediate energies using two-fragment correlations

    International Nuclear Information System (INIS)

    Sangster, T.C.; Britt, H.C.; Namboodiri, M.N.

    1993-01-01

    One of the most challenging topics in nuclear physics is multifragmentation at moderate excitation energies in large nuclear systems. Although the idea that multifragmentation is analogous to a liquid-gas phase transition is not new, only recently have highly exclusive experimental measurements been coupled with sophisticated theoretical models like QMD and BUU/VUU to explore reaction dynamics and the process of fragment formation. Indeed, much of what is known about multifragmentation has resulted from the study of complex correlations present in both the experimental data and theoretical calculations. One of the most crucial questions in the ongoing debate concerning the liquid-gas analogy is the differentiation between simultaneous and sequential fragment emission. Clearly, the phase-transition analogy breaks down if fragments are emitted sequentially, as in an evaporative process. A number of two-fragment correlation results have been published recently (including those presented in this paper) which attempt to put limits on the emission timescale using three-body Coulomb-trajectory calculations with explicit emission times for sequential decays from a fixed source density. These results have been generally consistent and indicate that intermediate-mass-fragment (IMF) emission is nearly simultaneous in medium-energy heavy-ion collisions. Only very recently have calculations been performed which approach this question from the other extreme: simultaneous emission from a variable-density source. When considered together, these results argue favorably for a simultaneous multifragmentation. In this paper the authors present comprehensive results on two-fragment correlations for heavy systems at intermediate energies

  4. Mass and Isospin Effects in Multifragmentation

    International Nuclear Information System (INIS)

    Sfienti, C.; Adrich, P.; Aumann, T.

    2005-01-01

    A systematic study of isospin effects in the breakup of projectile spectators at relativistic energies has been performed with the ALADiN spectrometer at the GSI laboratory (Darmstadt). Four different projectiles, 197Au, 124La, 124Sn and 107Sn, all with an incident energy of 600 A MeV, have been used, thus allowing a study of various combinations of masses and N/Z ratios in the entrance channel. The measurement of the momentum vector and of the charge of all projectile fragments with Z > 1 entering the acceptance of the ALADiN magnet has been performed with the high efficiency and resolution achieved with the TP-MUSIC IV detector. The rise and fall behavior of the mean multiplicity of IMFs as a function of Zbound, and its dependence on the isotopic composition, has been determined for the studied systems. Other observables investigated so far include the mean N/Z values of the emitted light fragments and neutron multiplicities. Qualitative agreement has been obtained between the observed gross properties and the predictions of the Statistical Multifragmentation Model

  5. Multifragmentation of gold nuclei by light relativistic ions - thermal break-up versus dynamic disintegration

    International Nuclear Information System (INIS)

    Avdeev, S.P.; Karnaukhov, V.A.; Petrov, L.A.

    2000-01-01

    Multiple emission of intermediate-mass fragments has been studied for collisions of p, 4He and 12C on Au with the 4π setup FASA. The mean IMF multiplicities saturate at a value of around 2 for incident energies above 6 GeV. An attempt to describe the observed IMF multiplicities in a two-stage scenario, a fast cascade followed by statistical multifragmentation, fails. Agreement with the measured IMF multiplicities is obtained by introducing an intermediate expansion phase and empirically modifying the excitation energies and masses of the remnants. The angular distributions and energy spectra from the p-induced collisions are in agreement with the scenario of 'thermal' multifragmentation of a hot and expanded target spectator. In the case of 12C + Au (22.4 GeV) and 4He (14.6 GeV) + Au collisions, deviations from a pure thermal break-up are seen in the emitted-fragment energy spectra, which are harder than both those from model calculations and those measured for p-induced collisions. This difference is attributed to a collective flow, with an expansion velocity at the surface of about 0.1 c (for 12C + Au collisions)

  6. Multifragmentation of gold nuclei interacting with photoemulsion nuclei at an energy of 10.7 GeV per projectile nucleon

    International Nuclear Information System (INIS)

    Gulamov, K.G.; Navotny, V.Sh.; Uzhinskii, V.V.

    1999-01-01

    Experimental data on the distributions of fragments with respect to the bound charge (Zbound, Zb3) and with respect to the multiplicities, and on their correlations, are presented. These data are compared with analogous data at 600 MeV per projectile nucleon obtained at the ALADIN facility. It has been shown that the processes of gold-nucleus multifragmentation at intermediate and high energies have some common features. At the same time, the multiplicity of medium-mass fragments becomes somewhat smaller at high energies. The data presented in this study are analyzed within a framework combining the statistical model of nuclear multifragmentation with the Regge model of the breakup of nuclei. This combined model has been shown to reproduce the experimental results under discussion qualitatively. The most pronounced discrepancies are observed for the yields of doubly charged fragments. The transverse momenta of fragments have been analyzed as functions of the bound charge Zbound. It has been demonstrated that the model considerably underestimates the transverse momenta of fragments. This is interpreted as evidence for a strong radial flow of spectator fragments

  7. Signature of a spinodal decomposition in the multifragmentation process of very heavy nuclear systems; Signature fossile d'une decomposition spinodale dans la multifragmentation de systemes nucleaires tres lourds

    Energy Technology Data Exchange (ETDEWEB)

    Tabacaru, G

    2000-12-01

    A dynamical description of the multifragmentation process shows that mechanical instabilities are responsible for the spinodal decomposition of a nuclear system that spends sufficient time in the low-density region. The Xe + Sn system at 32 MeV/A incident energy, measured with the INDRA multidetector, was chosen to test this hypothesis experimentally. High-performance techniques for the energy calibration of Silicon detectors and CsI(Tl) scintillators were developed in order to exploit the excellent detection qualities of the multidetector. For the first time, the contribution of {delta} rays to the light output emitted by scintillators was quantitatively derived. Multifragmentation events coming from a system composed of almost all the nucleons of the reaction entrance channel were selected using detection completeness and event-shape criteria. The dynamical model BoB realistically simulates the spinodal instabilities. This model reproduces all the dynamic and static observables. More exclusive comparisons were made to further constrain the model. Reduced-velocity correlation functions were studied and gave information about the topology of the fragments at freeze-out. The charge correlation of the fragments showed that a small proportion of events (0.1 %) emits equal-sized fragments. This was interpreted as a fossil signature of spinodal decomposition in a finite system. Indirectly, this is a proof of a first-order phase transition associated with the multifragmentation of hot nuclei. (author)

  8. Microcanonical simulation of nuclear multifragmentation

    International Nuclear Information System (INIS)

    Randrup, J.; Koonin, S.E.

    1987-01-01

    We discuss the formal basis for the theoretical treatment of nuclear multifragmentation within a microcanonical framework. The important role played by highly excited nuclear states and the interfragment forces is illustrated. The requirement of detailed balance is discussed in particular and illustrated for the fission-fusion Metropolis moves in configuration space. 13 refs., 2 figs
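
    The detailed-balance requirement can be illustrated schematically: a Metropolis acceptance for fission-fusion moves must include the ratio of proposal probabilities, not only the statistical weights. The function names below are hypothetical placeholders, not the authors' code.

      import random

      def metropolis_step(partition, weight, propose):
          """partition: current fragment partition; weight(p): its statistical
          weight; propose(p) -> (new_partition, g_forward, g_backward), the
          proposal probabilities of the move and of its exact reverse."""
          new, g_fwd, g_bwd = propose(partition)    # e.g. split one fragment in two
          ratio = (weight(new) * g_bwd) / (weight(partition) * g_fwd)
          if random.random() < min(1.0, ratio):     # detailed balance holds because
              return new                            # the g_bwd/g_fwd ratio is included
          return partition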

  9. Percolation and multifragmentation of nuclei

    International Nuclear Information System (INIS)

    Shmakov, S.Yu.; Uzhinskij, V.V.

    1989-01-01

    A method to build 'cold' nuclei as percolation clusters is suggested. Within the framework of definite assumptions about the way nucleon-nucleon couplings break in nuclear reactions, a description of the multifragmentation process in hadron-nucleus and nucleus-nucleus reactions at high energies is obtained. 19 refs.; 6 figs.
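
    A minimal sketch of building percolation clusters, assuming bond percolation on a simple cubic lattice with a union-find structure; the lattice geometry and bond probability p are illustrative assumptions, not the record's prescription.

      import random

      def find(parent, i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]      # path compression
              i = parent[i]
          return i

      def percolation_clusters(L, p, rng=random.Random(0)):
          """Cluster sizes (descending) on an L*L*L lattice with bond prob. p."""
          n = L**3
          parent = list(range(n))
          idx = lambda x, y, z: (x * L + y) * L + z
          for x in range(L):
              for y in range(L):
                  for z in range(L):
                      for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                          if x+dx < L and y+dy < L and z+dz < L and rng.random() < p:
                              a = find(parent, idx(x, y, z))
                              b = find(parent, idx(x+dx, y+dy, z+dz))
                              parent[a] = b                     # union the clusters
          sizes = {}
          for i in range(n):
              r = find(parent, i)
              sizes[r] = sizes.get(r, 0) + 1
          return sorted(sizes.values(), reverse=True)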

  10. DLNA: a simple one-dimensional dynamical model as a possible interpretation of fragment size distribution in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Lacroix, D.; Dayras, R.

    1996-08-01

    The possibility of interpreting multifragmentation data obtained from heavy-ion collisions at intermediate energies with a new type of model, the DLNA (Dynamical Limited Nuclear Aggregation), is discussed. This model is connected to a more general class of models exhibiting Self-Organized Criticality (SOC). It is shown that the fragment-size distributions exhibit a power-law dependence comparable to those obtained in second-order phase transition or percolation models. Fluctuations in terms of scaled factorial moments and cumulants are also studied: no signal of intermittency is seen. (K.A.)

  11. Signature of a spinodal decomposition in the multifragmentation process of very heavy nuclear systems

    International Nuclear Information System (INIS)

    Tabacaru, G.

    2000-12-01

    A dynamical description of the multifragmentation process shows that mechanical instabilities are responsible for the spinodal decomposition of a nuclear system that spends sufficient time in the low-density region. The Xe + Sn system at 32 MeV/A incident energy, measured with the INDRA multidetector, was chosen to test this hypothesis experimentally. High-performance techniques for the energy calibration of Silicon detectors and CsI(Tl) scintillators were developed in order to exploit the excellent detection qualities of the multidetector. For the first time, the contribution of δ rays to the light output emitted by scintillators was quantitatively derived. Multifragmentation events coming from a system composed of almost all the nucleons of the reaction entrance channel were selected using detection completeness and event-shape criteria. The dynamical model BoB realistically simulates the spinodal instabilities. This model reproduces all the dynamic and static observables. More exclusive comparisons were made to further constrain the model. Reduced-velocity correlation functions were studied and gave information about the topology of the fragments at freeze-out. The charge correlation of the fragments showed that a small proportion of events (0.1 %) emits equal-sized fragments. This was interpreted as a fossil signature of spinodal decomposition in a finite system. Indirectly, this is a proof of a first-order phase transition associated with the multifragmentation of hot nuclei. (author)

  12. Intermittency in the particle production and in the nuclear multifragmentation

    International Nuclear Information System (INIS)

    Bozek, P.; Ploszajczak, M.

    1991-01-01

    Intermittency is a manifestation of scale invariance and randomness in physical systems. Intermittency in relativistic heavy-ion collisions, and in particular its projectile dependence, multiplicity dependence and source-size dependence, is discussed in the framework of a model of spatio-temporal intermittency. Moreover, recent theoretical results on intermittency in nuclear multifragmentation are presented. (author) 35 refs., 4 figs., 1 tab

  13. New results on nuclear multifragmentation in nucleon-nucleus and nucleus-nucleus collisions at relativistic energies

    International Nuclear Information System (INIS)

    Besliu, Calin; Jipa, Alexandru; Iliescu, Bogdan; Felea, Daniel

    2002-01-01

    Some new aspects of multifragmentation processes in nucleus-nucleus and nucleon-nucleus collisions at high energies are discussed in this work. Experimental data obtained in international collaborations (for example, the MULTI Collaboration at KEK Tsukuba (Japan) and the SKM 200 Collaboration at JINR Dubna (Russia)) are used to discuss new mechanisms in target-nucleus fragmentation. Correlations with stopping power, participant-region size and energy density are included. Comparisons of the experimental results with the predictions of a phenomenological geometric model for intermediate-mass-fragment multiplicity, caloric curves and angular distributions are also presented. These results are used for a global description of the multifragmentation processes in nucleon-nucleus and nucleus-nucleus collisions at relativistic energies. The size of the participant region and the average intermediate-mass-fragment multiplicity are taken into consideration using the free-space probability. A few correlations between the energy deposited in the participant region and the stability of the intermediate-mass fragments are presented in this work. The importance of the collision geometry in the multifragmentation processes is stressed. The results suggest different time moments for the fragmentation of the incident nucleus and of the target nucleus. The associated entropies are distinct. (authors)

  14. Instability in nuclear dynamics: loss of collectivity and multifragmentation

    International Nuclear Information System (INIS)

    Colonna, M.; Di Toro, M.; Guarnera, A.; Latora, V.; Smerzi, A.

    1995-01-01

    Two limiting cases of nuclear dynamics are analysed: the disappearance of giant collective motions in hot nuclei and the nuclear disassembly in violent heavy-ion collisions. For collective vibrations built on excited states we get a dramatic increase of the widths of hot Giant Dipole Resonances (GDR). As a consequence of the competition with particle evaporation we get a sharp quenching of giant photon emission. Pre-equilibrium effects on the GDR formation are also accounted for. At high temperature and low density the collective motions can become unstable, leading to multifragmentation events in heavy-ion collisions. We present a general procedure to identify instability regions and to obtain information on the instability point. Some hints towards a fully dynamical picture of multi-fragmentation processes are finally suggested. (author)

  15. Multifragmentation in 30 MeV/u [sup 129]Xe induced reactions

    Energy Technology Data Exchange (ETDEWEB)

    Colonna, N. (INFN, Bari (Italy)); Bowman, D.R. (Chalk River Lab. (Canada)); Brambilla, S. (Dipt. di Fisica and INFN, Milan (Italy)); Bruno, M. (Dipt. di Fisica and INFN, Bologna (Italy)); Buttazzo, P. (Dipt. di Fisica and INFN, Trieste (Italy)); Celano, L. (INFN, Bari (Italy)); Chiodini, S. (Dipt. di Fisica and INFN, Milan (Italy)); D'Agostino, M. (Dipt. di Fisica and INFN, Bologna (Italy)); Dinius, J.D. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Ferrero, A. (Dipt. di Fisica and INFN, Milan (Italy)); Gelbke, K. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Glasmacher, T. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Gramegna, F. (INFN, Lab. Nazionali, Legnaro (Italy)); Handzy, D.O. (Michigan State Univ., East Lansing, MI (United States). National Superconducting Cyclotron Lab.); Horn, D. (Chalk River Lab. (Canada))

    1994-01-01

    Multifragmentation processes in the 30 MeV/u [sup 129]Xe + [sup nat]Cu and [sup 129]Xe + [sup 197]Au reactions were studied with a low threshold, high-granularity 4[pi] detection system. Fragment velocity distributions have been measured at forward angles as a function of the total charged particle multiplicity. While a two-source pattern is observed for the peripheral collisions, no evidence of a dinuclear emission pattern is found for the most central collisions. Kinematical observables, such as fragment relative velocity, relative angle and reduced velocity correlation functions indicate a fast timescale of the multifragmentation process in these reactions. (orig.)

  16. Dynamical and many-body correlation effects in the kinetic energy spectra of isotopes produced in nuclear multifragmentation

    Science.gov (United States)

    Souza, S. R.; Donangelo, R.; Lynch, W. G.; Tsang, M. B.

    2018-03-01

    The properties of the kinetic-energy spectra of light isotopes produced in the breakup of a nuclear source and during the de-excitation of its products are examined. The initial stage, at which the hot fragments are created, is modeled by the statistical multifragmentation model, whereas the Weisskopf-Ewing evaporation treatment is adopted to describe the subsequent fragment de-excitation, as the fragments follow classical trajectories dictated by the mutual Coulomb repulsion. The energy spectra obtained are compared to available experimental data. The influence of the fusion cross section entering the evaporation treatment is investigated and turns out to have only a small effect on the qualitative aspects of the energy spectra. Although these aspects can be fairly well described by the model, the underlying physics associated with the quantitative discrepancies remains to be understood.
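
    The Weisskopf-Ewing treatment rests on an emission-rate density of roughly the form R(ε) ∝ (2s+1) σinv(ε) ε ρ(E* − B − ε). The sketch below samples an emission energy from that shape, assuming a Fermi-gas level density ρ(E) ∝ exp(2·sqrt(aE)); the inverse cross section and all constants are placeholder assumptions, not the paper's inputs.

      import numpy as np

      def rate_density(eps, e_star, b_sep, a_dens, spin=0.5, sigma_inv=lambda e: 1.0):
          """Unnormalized Weisskopf-Ewing emission rate vs kinetic energy eps (MeV)."""
          u = e_star - b_sep - eps                  # daughter excitation energy
          if u <= 0.0:
              return 0.0
          return (2*spin + 1) * sigma_inv(eps) * eps * np.exp(2.0 * np.sqrt(a_dens * u))

      def sample_emission_energy(e_star, b_sep, a_dens, rng=np.random.default_rng(0)):
          """Draw one emission energy from the discretized rate density."""
          eps_grid = np.linspace(0.0, e_star - b_sep, 500)
          w = np.array([rate_density(e, e_star, b_sep, a_dens) for e in eps_grid])
          return rng.choice(eps_grid, p=w / w.sum())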

  17. Study of very heavy systems by means of INDRA: First evidence for a volume effect in the nuclear multifragmentation process; Etude de systemes tres lourds observes avec INDRA: Premiere mise en evidence d'un effet de volume dans le processus de multifragmentation nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Frankland, John David [Inst. de Physique Nucleaire, Paris-11 Univ., 91 - Orsay (France)

    1998-12-10

    We present a study of Gd+U collisions at 36 AMeV measured with the INDRA multidetector, permitting almost complete detection (over 80%) of all the reaction products. We show that events exist which correspond to the multifragmentation of a single system comprising the majority of the nucleons for a cross-section of 2.6 mbarn, by isolating reactions for which the emitted fragments have lost all memory of the entrance channel. Such reactions correspond to neither the most central collisions nor the most isotropic events (in the fragments' momentum space), and therefore cannot be correctly distinguished from the dominant binary deeply-inelastic collisions using these criteria. An initial comparison of the selected data with a statistical code indicates that fragments are formed in a dilute, compact system, undergoing a self-similar expansion corresponding to a collective energy of between 1 and 1.5 MeV. Comparison with the same type of events observed in Xe+Sn collisions at 32 AMeV reveals the existence of a scaling law for the multifragmentation of systems of different mass at the same excitation energy per nucleon: fragment Z distributions are identical while their multiplicity increases proportionally to the mass of the multi-fragmenting system. This observation is interpreted as an experimental signal that this multifragmentation originates in a bulk instability of low-density nuclear matter (spinodal region). A complete semi-classical microscopic calculation for the two reactions, including the formation and multifragmentation by spinodal decomposition of very heavy, low-density systems, reproduces very well not only the experimental fragment multiplicities and Z distributions but also their mean kinetic energies, as well as the size distributions of the largest fragments. (author) 156 refs., 76 figs., 19 tabs.

  18. Study of very heavy systems by means of INDRA: First evidence for a volume effect in the nuclear multifragmentation process

    International Nuclear Information System (INIS)

    Frankland, John David

    1998-01-01

    We present a study of Gd+U collisions at 36 AMeV measured with the INDRA multidetector, permitting almost complete detection (over 80%) of all the reaction products. We show that events exist which correspond to the multifragmentation of a single system comprising the majority of the nucleons for a cross-section of 2.6 mbarn, by isolating reactions for which the emitted fragments have lost all memory of the entrance channel. Such reactions correspond to neither the most central collisions nor the most isotropic events (in the fragments' momentum space), and therefore cannot be correctly distinguished from the dominant binary deeply-inelastic collisions using these criteria. An initial comparison of the selected data with a statistical code indicates that fragments are formed in a dilute, compact system, undergoing a self-similar expansion corresponding to a collective energy of between 1 and 1.5 MeV. Comparison with the same type of events observed in Xe+Sn collisions at 32 AMeV reveals the existence of a scaling law for the multifragmentation of systems of different mass at the same excitation energy per nucleon: fragment Z distributions are identical while their multiplicity increases proportionally to the mass of the multi-fragmenting system. This observation is interpreted as an experimental signal that this multifragmentation originates in a bulk instability of low-density nuclear matter (spinodal region). A complete semi-classical microscopic calculation for the two reactions, including the formation and multifragmentation by spinodal decomposition of very heavy, low-density systems, reproduces very well not only the experimental fragment multiplicities and Z distributions but also their mean kinetic energies, as well as the size distributions of the largest fragments. (author)

  19. From the multifragmentation to the quark-gluon plasma

    International Nuclear Information System (INIS)

    Boisgard, R.

    1988-01-01

    Multifragmentation and quark-deconfinement phenomena are discussed. A scenario for studying the stability of hot and compressed nuclei is developed. The thermalization of the nuclei generated in heavy-ion reactions is described by a pre-equilibrium model. A hydrodynamical approach and a percolation model are applied to determine the stability of the nucleus. The conditions for the nuclear fragmentation process and the cross sections for various systems at different energies are calculated. The experiments were carried out in ultrarelativistic interactions at CERN. The results are different from those obtained at lower energies and in proton reactions. The formation of a quark-gluon plasma is described by means of an aggregation model. The results are similar to those obtained with sophisticated methods. The differences between macroscopic systems and the one studied here (with a small number of particles) are stressed [fr

  20. From instabilities to multifragmentation

    International Nuclear Information System (INIS)

    Chomaz, P.; Jacquot, B.; Colonna, M.; Guarnera, A.

    1994-01-01

    The main purpose of this article is to show that, in many physical situations, the spinodal decomposition of unstable systems can be correctly described by stochastic mean-field approaches. Such theories predict that the occurrence of the spinodal instability leading to the multifragmentation of an expanded nuclear system can be signaled through the observation of time scales for fragment formation of the order of 100 fm/c and of typical fragment sizes around A=20. We then discuss the fact that these fragments are formed at finite temperature and so can subsequently decay in flight. Finally, we give some hints about possible experimental signals of such first-order phase transitions. (authors). 12 refs., 5 figs

  1. From instabilities to multifragmentation

    Energy Technology Data Exchange (ETDEWEB)

    Chomaz, P.; Jacquot, B. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]; Colonna, M.; Guarnera, A. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Istituto Nazionale di Fisica Nucleare, Bologna (Italy)]

    1994-12-31

    The main purpose of this article is to show that, in many physical situations, the spinodal decomposition of unstable systems can be correctly described by stochastic mean-field approaches. Such theories predict that the occurrence of the spinodal instability leading to the multifragmentation of an expanded nuclear system can be signaled through the observation of time scales for fragment formation of the order of 100 fm/c and of typical fragment sizes around A=20. We then discuss the fact that these fragments are formed at finite temperature and so can subsequently decay in flight. Finally, we give some hints about possible experimental signals of such first-order phase transitions. (authors). 12 refs., 5 figs.

  2. Multifragmentation and flow in central collisions of heavy systems

    International Nuclear Information System (INIS)

    Harris, J.W.; Jacak, B.V.; Kampert, K.H.

    1987-05-01

    Experimental results are presented on the production of light particles (A < 5) and intermediate-mass fragments (6 < A < 18) over a large solid angle. The reactions 200 MeV/n Au + Au and Au + Fe were studied to provide information on multifragmentation processes and collective flow. 20 refs., 6 figs

  3. Search for Coulomb-induced multifragmentation in the reaction Gd+U at 36 MeV/u

    International Nuclear Information System (INIS)

    Bacri, C.O.; Squalli, M.; Borderie, B.; Frankland, J.D.; Parlog, M.; Rivet, M.F.; Tassan-Got, L.; Charvet, J.L.

    1996-03-01

    Coulomb-induced multifragmentation is searched for in the study of the Gd+U system at 36 MeV/u with the 4π INDRA detector. Events corresponding to fragment emission from a single source were selected using global variables. Different kinematical correlations between the emitted fragments are discussed. Comparisons with simulations are used to extract the shape of the system which decays by multifragmentation, and also to obtain quantitative information about possible expansion effects. (K.A.)

  4. Multifragmentation of a very heavy nuclear system (II): bulk properties and spinodal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Frankland, J.D.; Rivet, M.F.; Borderie, B. [Paris-11 Univ., Inst. de Physique Nucleaire, 91 - Orsay (France)] [and others]

    2000-07-01

    The properties of fragments and light charged particles emitted in multifragmentation of single sources formed in central 36 A.MeV Gd+U collisions are reviewed. Most of the products are isotropically distributed in the reaction c.m. Fragment kinetic energies reveal the onset of radial collective energy. A bulk effect is experimentally evidenced from the similarity of the charge distribution with that from the lighter 32 A.MeV Xe+Sn system. Spinodal decomposition of finite nuclear matter exhibits the same property in simulated central collisions for the two systems, and appears therefore as a possible mechanism at the origin of multifragmentation in this incident energy domain. (authors)

  5. Multifragmentation of a very heavy nuclear system (II): bulk properties and spinodal decomposition

    International Nuclear Information System (INIS)

    Frankland, J.D.; Rivet, M.F.; Borderie, B.

    2000-01-01

    The properties of fragments and light charged particles emitted in multifragmentation of single sources formed in central 36 A.MeV Gd+U collisions are reviewed. Most of the products are isotropically distributed in the reaction c.m. Fragment kinetic energies reveal the onset of radial collective energy. A bulk effect is experimentally evidenced from the similarity of the charge distribution with that from the lighter 32 A.MeV Xe+Sn system. Spinodal decomposition of finite nuclear matter exhibits the same property in simulated central collisions for the two systems, and appears therefore as a possible mechanism at the origin of multifragmentation in this incident energy domain. (authors)

  6. Effect of particle fluctuation on isoscaling and isobaric yield ratio of nuclear multifragmentation

    International Nuclear Information System (INIS)

    Mallik, Swagata; Chaudhuri, Gargi

    2013-01-01

    Isoscaling and isobaric yield ratio parameters are compared from canonical and grand canonical ensembles when applied to the multifragmentation of finite nuclei. The source dependence of the isoscaling parameters and the source and isospin dependence of the isobaric yield ratio parameters are examined in the framework of the canonical and the grand canonical models. It is found that as the nucleus fragments more, results from both ensembles converge, and observables calculated from the canonical ensemble coincide more closely with those obtained from the formulae derived using the grand canonical ensemble
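
    Isoscaling itself amounts to the relation Y2(N,Z)/Y1(N,Z) = C·exp(αN + βZ) between fragment yields from two similar reactions; the parameters can be extracted by a linear least-squares fit of ln(Y2/Y1), as in this sketch (yield dictionaries keyed by (N, Z) are an assumed input format).

      import numpy as np

      def isoscaling_fit(y1, y2):
          """y1, y2: dicts of fragment yields keyed by (N, Z) for two reactions."""
          keys = [k for k in y1 if k in y2]
          A = np.array([[n, z, 1.0] for n, z in keys])       # columns: N, Z, const
          b = np.log([y2[k] / y1[k] for k in keys])
          (alpha, beta, lnC), *_ = np.linalg.lstsq(A, b, rcond=None)
          return alpha, beta, np.exp(lnC)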

  7. Break-up fragment topology in statistical multifragmentation models

    International Nuclear Information System (INIS)

    Raduta, Ad. R.

    2009-01-01

    Break-up fragmentation patterns together with kinetic and configurational energy fluctuations are investigated in the framework of a microcanonical model with fragment degrees of freedom over a broad excitation energy range. As long as fragment partitioning is approximately preserved, energy fluctuations are found to be rather insensitive to both the way in which the freeze-out volume is constrained and the trajectory followed by the system in the excitation-energy-freeze-out volume space. Due to hard-core repulsion, the freeze-out volume is found to be populated nonuniformly, its highly depleted core giving the source a bubble-like structure. The most probable localization of the largest fragments in the freeze-out volume may be inferred experimentally from their kinematic properties, largely dictated by Coulomb repulsion.

  8. Bimodality and negative heat capacity in multifragmentation

    International Nuclear Information System (INIS)

    Tamain, B.; Bougault, R.; Lopez, O.; Pichon, M.

    2003-01-01

    This contribution addresses the question of the possible link between multifragmentation and the liquid-gas phase transition of nuclear matter. Bimodality seems to be a robust signal of this link, in the sense that theoretical calculations indicate that it is preserved even if a sizeable fraction of the available energy has not been shared among all the degrees of freedom. The corresponding measured properties are consistent with what is expected in a liquid-gas phase transition picture. Moreover, bimodality and negative heat capacity are observed for the same set of events. (authors)
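
    One widely used prescription relates negative heat capacity to abnormal kinetic-energy fluctuations at fixed total energy: with σk² = T²·Ck·(1 − Ck/C), the total heat capacity C = Ck/(1 − σk²/(Ck·T²)) becomes negative when σk² exceeds Ck·T². A minimal sketch, with the microcanonical temperature and kinetic heat capacity Ck assumed to be supplied by a model rather than measured directly:

      import numpy as np

      def total_heat_capacity(e_kin_samples, temperature, c_k):
          """e_kin_samples: per-event kinetic (partial) energies at fixed total
          energy; temperature and c_k are assumed model inputs."""
          sigma2 = np.var(e_kin_samples)
          return c_k / (1.0 - sigma2 / (c_k * temperature**2))   # < 0 if sigma2 > c_k*T^2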

  9. Thermodynamical aspects of the Xe + Sn multifragmentation from 32 to 50 A.MeV; Aspect thermodynamique de la multifragmentation Xe + Sn 32 a 50 A.MeV

    Energy Technology Data Exchange (ETDEWEB)

    Le Neindre, N

    1999-10-01

    Central collisions between heavy ions at intermediate energies are an ideal tool for studying nuclear matter far away from its saturation state, in temperature as well as in density. The INDRA multidetector has allowed us to select, in the Xe + Sn reactions from 32 to 50 A.MeV, a set of events corresponding to the formation of a unique source of excited and compressed nuclear matter which subsequently breaks up. This unique-source selection allows us to neglect the entrance channel and to study the system from the viewpoint of thermodynamical equilibrium. The features of the fragments are compatible with the results of a statistical model which assumes the system's equilibration. However, in order to reproduce the characteristics of the light particles, it is necessary to take into account the time dependence of the de-excitation, by considering that part of them could be emitted or escape during the expansion stage, before the multifragmentation of the unique source. These particles should explain the high-energy part of the energy distributions observed experimentally for protons, deuterons, tritons and helium-3. Finally, we have pointed out, for this kind of heavy-ion collisions leading to the formation of unique sources, a phase transition of nuclear matter equivalent to the liquid-gas transition in macroscopic fluids. (authors)

  10. Hot nuclei and search for multifragmentation in medium-energy heavy-ion collisions

    International Nuclear Information System (INIS)

    Doubre, H.

    1988-01-01

    Some recent determinations of the excitation energies and temperatures of composite systems formed in intermediate-energy heavy-ion collisions are described, and the issue of a limiting temperature is discussed. Several examples of experimental investigations of the possible occurrence of a multifragmentation process are also described

  11. Investigation of nuclear multifragmentation using molecular dynamics and restructured aggregation

    International Nuclear Information System (INIS)

    Paula, L. de; Nemeth, J.; Ben-Hao, Sa.; Leray, S.; Ngo, C.; Souza, S.R.; Yu-Ming, Zheng; Ngo, H.

    1991-01-01

    We study the stability of excited 197Au nuclei with respect to multifragmentation. For this we use a dynamical simulation based on molecular dynamics and restructured aggregation. Particular attention is paid to checking the stability of the ground-state nuclei generated by the simulation. Four kinds of excitation are considered: heat, compression, rotation and a geometrical instability created when a projectile drills a hole in a 197Au nucleus

  12. Multifragmentation in Au + Au collisions studied with AMD-V

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Akira [Tohoku Univ., Sendai (Japan). Faculty of Science]

    1998-07-01

    AMD-V is a well-suited model for the calculation of multifragmentation in Au + Au collisions. AMD-V takes into account the antisymmetry of the incident nucleus, the target nucleus and the fragments; furthermore, it treats the quantum effect of the existence of many channels in the intermediate and final states. Incident energies of 150 and 250 MeV/nucleon were used in the experiments. The multifragment data in {sup 197}Au + {sup 197}Au collisions were reproduced by AMD-V calculations using the Gogny force, corresponding to a nuclear-matter incompressibility K = 228 MeV and a momentum-dependent mean field. When another interaction (the SKG2 force, corresponding to K = 373 MeV) with a momentum-independent mean field was used, the calculation could not reproduce the experimental values, because the nucleon and deuteron yields were overestimated and the {alpha}-particle and intermediate-fragment yields underestimated. (S.Y.)

  13. Study of the multifragmentation in central collisions of the system: 129Xe + natSn between 32 and 50 MeV/A: measurement of collective expansion energy and of freeze-out volume

    International Nuclear Information System (INIS)

    Salou, S.

    1997-01-01

    The multifragmentation of the nuclear system formed in central collisions of the Xe+Sn reaction between 32 and 50 MeV/A has been studied with the INDRA detector. A tensorial analysis is used to select central collisions. An important part of the charge (about 85 %) is isotropically emitted. The charge partitions have the characteristics of a simultaneous multiple-fragment emission. The shape of the fragment kinetic-energy distributions, together with the reduced-velocity correlation functions, indicates that the fragmentation is a simultaneous process occurring at low density. A comparison between the experimental data and the predictions of the Copenhagen statistical multifragmentation model (SMM) shows that the charge partitions agree with the hypothesis of thermodynamical equilibrium, whereas the kinetic observables are more constraining for the model and difficult to reproduce. Fragment correlation functions are used to extract the freeze-out volume and the collective radial energy. At 50 MeV/A, the freeze-out volume is estimated to be 2.7 times the normal volume. It decreases with incident energy, down to nearly twice the normal volume at 32 MeV/A. The collective energy evolves from 0 to 1.3 MeV/A with the bombarding energy. This expansion is not purely thermal and probably originates from a dynamical compression developed in the early stage of the collision. (author)

  14. Study of the multifragmentation in central collisions of the system: {sup 129}Xe + {sup nat}Sn between 32 and 50 MeV/A: measurement of collective expansion energy and of freeze-out volume; Etude de la multifragmentation dans les collisions centrales pour le systeme {sup 129}Xe + {sup nat}Sn entre 32 et 50 MeV/A: mesure de l'energie collective d'expansion et du volume de freeze-out

    Energy Technology Data Exchange (ETDEWEB)

    Salou, S

    1997-12-05

    The multifragmentation of the nuclear system formed in central collisions of the Xe+Sn reaction between 32 and 50 MeV/A has been studied with the INDRA detector. A tensorial analysis is used to select central collisions. An important part of the charge (about 85 %) is isotropically emitted. The charge partitions have the characteristics of a simultaneous multiple-fragment emission. The shape of the fragment kinetic-energy distributions, together with the reduced-velocity correlation functions, indicates that the fragmentation is a simultaneous process occurring at low density. A comparison between the experimental data and the predictions of the Copenhagen statistical multifragmentation model (SMM) shows that the charge partitions agree with the hypothesis of thermodynamical equilibrium, whereas the kinetic observables are more constraining for the model and difficult to reproduce. Fragment correlation functions are used to extract the freeze-out volume and the collective radial energy. At 50 MeV/A, the freeze-out volume is estimated to be 2.7 times the normal volume. It decreases with incident energy, down to nearly twice the normal volume at 32 MeV/A. The collective energy evolves from 0 to 1.3 MeV/A with the bombarding energy. This expansion is not purely thermal and probably originates from a dynamical compression developed in the early stage of the collision. (author) 88 refs.

  15. Gold multifragmentation: Analysis of an exclusive experiment

    International Nuclear Information System (INIS)

    Aichelin, J.; Campi, X.

    1986-01-01

    We analyze completely exclusive 1 GeV/nucleon gold-emulsion reaction data, with special emphasis on quantities which may help to settle the unsolved problem of which reaction mechanism produces the multifragmentation of heavy nuclei. We present results on correlations between target fragments and projectile fragments and among projectile fragments. In particular, we present for the first time the evolution of the mass-yield distribution with the violence of the collision, which is characterized by the number of Z = 1 particles. We find that events producing Z = 2 particles have a different signature from those producing medium-mass fragments. This shows that the agreement of the data with theories describing the inclusive mass yield by a single process, such as a liquid-gas phase transition, is accidental

  16. Multifragmentation of a very heavy nuclear system (I): selection of single-source events

    Energy Technology Data Exchange (ETDEWEB)

    Frankland, J.D.; Bacri, Ch.O.; Borderie, B. [Paris-11 Univ., Inst. de Physique Nucleaire, 91 - Orsay (France)] [and others]

    2000-07-01

    A sample of 'single-source' events, compatible with the multifragmentation of very heavy fused systems, is isolated among well-measured {sup 155}Gd + {sup nat}U 36 A.MeV reactions by examining the evolution of the kinematics of fragments with Z {>=} 5 as a function of the dissipated energy and the loss of memory of the entrance channel. Single-source events are found to result from very central collisions. Such central collisions may also lead to multiple fragment emission due to the decay of excited projectile- and target-like nuclei and so-called 'neck' emission, and for this reason the isolation of single-source events is very difficult. Event-selection criteria based on the centrality of collisions, or on the isotropy of the emitted fragments in each event, are found to be inefficient at separating the two mechanisms, unless they take into account the redistribution of the fragments' kinetic energies into directions perpendicular to the beam axis. The selected events are good candidates in which to look for bulk effects in the multifragmentation process. (authors)

  17. Multifragmentation of a very heavy nuclear system (I): selection of single-source events

    International Nuclear Information System (INIS)

    Frankland, J.D.; Bacri, Ch.O.; Borderie, B.; Rivet, M.F.; Squalli, M.; Auger, G.; Bellaize, N.; Bocage, F.; Bougault, R.; Brou, R.; Buchet, Ph.; Chbihi, A.; Colin, J.; Cussol, D.; Dayras, R.; Demeyer, A.; Dore, D.; Durand, D.; Galichet, E.; Genouin-Duhamel, E.; Gerlic, E.; Guinet, D.; Lautesse, Ph.; Laville, J.L.; Lecolley, J.F.; Legrain, R.; Le Neindre, N.; Lopez, O.; Louvel, M.; Maskay, A.M.; Nalpas, L.; Nguyen, A.D.; Parlog, M.; Peter, J.; Plagnol, E.; Rosato, E.; Saint-Laurent, F.; Salou, S.; Steckmeyer, J.C.; Stern, M.; Tabacaru, G.; Tamain, B.; Tirel, O.; Tassan-Got, L.; Vient, E.; Volant, C.; Wieleczko, J.P.

    2001-01-01

    A sample of 'single-source' events, compatible with the multifragmentation of very heavy fused systems, is isolated among well-measured 155Gd + natU 36 A MeV reactions by examining the evolution of the kinematics of fragments with Z≥5 as a function of the dissipated energy and the loss of memory of the entrance channel. Single-source events are found to result from very central collisions. Such central collisions may also lead to multiple fragment emission due to the decay of excited projectile- and target-like nuclei and so-called 'neck' emission, and for this reason the isolation of single-source events is very difficult. Event-selection criteria based on the centrality of collisions, or on the isotropy of the emitted fragments in each event, are found to be inefficient at separating the two mechanisms, unless they take into account the redistribution of the fragments' kinetic energies into directions perpendicular to the beam axis. The selected events are good candidates in which to look for bulk effects in the multifragmentation process

  18. Thermodynamical aspects of the Xe + Sn multifragmentation from 32 to 50 A.MeV

    International Nuclear Information System (INIS)

    Le Neindre, N.

    1999-10-01

    Central collisions between heavy ions at intermediate energies are an ideal tool for studying nuclear matter far away from its saturation state, in temperature as well as in density. The INDRA multidetector has allowed us to select, in the Xe + Sn reactions from 32 to 50 A.MeV, a set of events corresponding to the formation of a unique source of excited and compressed nuclear matter which subsequently breaks up. This unique-source selection allows us to neglect the entrance channel and to study the system from the viewpoint of thermodynamical equilibrium. The features of the fragments are compatible with the results of a statistical model which assumes the system's equilibration. However, in order to reproduce the characteristics of the light particles, it is necessary to take into account the time dependence of the de-excitation, by considering that part of them could be emitted or escape during the expansion stage, before the multifragmentation of the unique source. These particles should explain the high-energy part of the energy distributions observed experimentally for protons, deuterons, tritons and helium-3. Finally, we have pointed out, for this kind of heavy-ion collisions leading to the formation of unique sources, a phase transition of nuclear matter equivalent to the liquid-gas transition in macroscopic fluids. (authors)

  19. Similarity of multi-fragmentation of residual nucleus created in nucleus-nucleus interactions at high energies

    International Nuclear Information System (INIS)

    Abdel-Hafiez, A.; Chernyavski, M.M.; Orlova, G.I.; Gulamov, K.G.; Navotny, V.SH.; Uzhinskii, V.V.

    2000-01-01

    Experimental data on the multifragmentation of residual krypton nuclei created in the interactions of krypton nuclei with photoemulsion nuclei at an energy of 0.9 GeV per nucleon are presented, in comparison with analogous data on the fragmentation of residual gold nuclei at an energy of 10.7 GeV/nucleon. It is shown for the first time that there are two regimes of nuclear multifragmentation: the former when less than one-half of the nucleons of the projectile nucleus are knocked out, the latter when more than one-half of the nucleons are knocked out. Residual nuclei of similar mass created in different reactions fragment in a practically identical way when more than one-half of the nucleons of the original nuclei are knocked out. Evidence for the existence of a radial flow of spectator fragments in the decay of residual krypton nuclei is found

  20. Collective motion and multifragmentation in the central collisions of the Xe+Sn System at 50 MeV/nucleon

    International Nuclear Information System (INIS)

    Marie, Nathalie

    1995-01-01

    This work concerns the search for an experimental signature of a collective motion in the multifragmentation of hot nuclei formed in central collisions of the Xe+Sn system at 50 MeV/nucleon. The measurements were performed with the multidetector INDRA. The central collisions were selected by means of a shape variable built from the fragment momenta. The spectra and charge distributions of the products show that most of the charged particles and fragments come from the disintegration of an equilibrated source. A minimum value of 40 mb is given for the cross section of such a mechanism. The charge (79) and the excitation energy (12.5 MeV/nucleon) of the equilibrated source have been determined by counting and by the calorimetry method. Comparisons with the predictions of a sequential binary model and of a phenomenological simultaneous model have been made. Only the simultaneous disintegration is able to explain the distribution of the relative angles between fragments. The energy spectra of the fragments are correctly reproduced assuming a collective energy of 2 MeV/nucleon. No dependence of this collective energy on the charge of the fragment is found. However, this process of multifragmentation with a collective energy fails to explain the properties of the light charged particles. (author) [fr
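
    Shape-variable selections of this kind are commonly built from the kinetic-energy tensor Qij = Σf pi·pj/(2m); its eigen-decomposition gives the flow angle (main axis versus beam) and a sphericity-like measure. A sketch under those assumptions, with per-fragment momenta and masses as inputs and the beam along z (not necessarily the thesis's exact variable):

      import numpy as np

      def flow_angle_and_sphericity(momenta, masses):
          """momenta: per-fragment 3-vectors; masses: same order as momenta."""
          q = np.zeros((3, 3))
          for p, m in zip(momenta, masses):
              q += np.outer(p, p) / (2.0 * m)       # kinetic-energy tensor
          vals, vecs = np.linalg.eigh(q)            # eigenvalues in ascending order
          theta_flow = np.degrees(np.arccos(abs(vecs[:, -1][2])))
          lam = np.sort(vals / vals.sum())
          sphericity = 1.5 * (lam[0] + lam[1])      # 1 for a fully isotropic event
          return theta_flow, sphericity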

  1. Explosive multifragmentation in the reaction {sup 32}S+{sup 27}Al at 37.5 MeV/nucleon

    Energy Technology Data Exchange (ETDEWEB)

    Chabane, A

    1995-06-01

    The study of complete events in the reaction {sup 32}S+{sup 27}Al at 37.5 MeV/nucleon provides evidence for multifragmentation of the composite nucleus formed by fusion of the projectile-target system. Percolation theory, an innovative idea in nuclear physics, shows that our data are ``overcritical`` in the percolation sense. However, with data at only one incident energy we cannot affirm the existence of a nuclear phase transition. Our data are incompatible with a binary sequential decay process (GEMINI code). Comparisons have also been made with the predictions of a microcanonical multifragmentation decay process (Berlin code). Dynamical aspects of the experimental data are not consistent with the calculations. The discrepancy is most obvious when we compare the spectrum of the square of the momentum of the largest fragment and the correlation function of the reduced relative velocity of pairs of intermediate mass fragments. The use of the Boltzmann formula in the context of the canonical ensemble to study the charge partitions provides a good fit to the experimental data. The temperature deduced from the measured partition probabilities is 5 MeV and the freeze-out volume corresponds to a density of approximately one twentieth of the normal value. The dynamical variables are very well reproduced by an explosive multifragmentation simulation code. The collective energy deduced can be understood as the consequence of a compressional effect which produces a shock wave when the compression velocity exceeds the sound velocity in nuclear matter. The measurement of the collective expansion energy allows us to estimate a value of the incompressibility coefficient which characterizes nuclear matter. (author). 51 refs., 73 figs., 9 tabs.
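
    The charge-partition analysis mentioned above (Boltzmann weights in the canonical ensemble) can be illustrated with a minimal Python sketch. Everything quantitative in it is an assumption for illustration: the toy volume/surface coefficients and the use of charge-only partitions are not the author's actual parametrization; only the temperature (5 MeV) and the total charge of the 32S+27Al system (Z = 29) come from the record.

      import math

      T = 5.0        # MeV: the temperature quoted in the abstract
      Z_TOT = 29     # total charge of the 32S + 27Al system
      A_VOL, A_SURF = -8.0, 10.0   # toy volume/surface coefficients (illustrative only)

      def partitions(n, max_part=None):
          """Yield all integer partitions of n; each part plays the role of a fragment charge."""
          if max_part is None:
              max_part = n
          if n == 0:
              yield []
              return
          for k in range(min(n, max_part), 0, -1):
              for rest in partitions(n - k, k):
                  yield [k] + rest

      def energy(partition):
          """Toy 'liquid-drop' partition energy: a volume and a surface term per fragment."""
          return sum(A_VOL * z + A_SURF * z ** (2.0 / 3.0) for z in partition)

      parts = list(partitions(Z_TOT))
      energies = [energy(p) for p in parts]
      e_min = min(energies)

      # Boltzmann weights in the canonical ensemble, shifted by e_min for numerical stability
      weights = [math.exp(-(e - e_min) / T) for e in energies]
      norm = sum(weights)

      # Print the five most probable charge partitions
      for p, w in sorted(zip(parts, weights), key=lambda pw: -pw[1])[:5]:
          print(p, round(w / norm, 4))

    Raising T in this toy model shifts the most probable partitions from one large fragment toward many small ones, which is the qualitative content of the overcritical regime discussed above.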

  2. Experimental determination of fragment excitation energies in multifragmentation events

    Energy Technology Data Exchange (ETDEWEB)

    Marie, N.; Natowitz, J.B. [Texas A and M Univ., College Station, TX (United States). Cyclotron Inst.; Chbihi, A.; Le Fevre, A.; Salou, S.; Wieleczko, J.P.; Gingras, L.; Auger, G. [Grand Accelerateur National d`Ions Lourds, 14 - Caen (France); Assenard, M. [Nantes Univ., 44 (France); Bacri, Ch.O. [Centre National de la Recherche Scientifique, CNRS, 91 - Orsay (France)] [and others]

    1998-03-17

    For 50 MeV/nucleon {sup 129}Xe + {sup nat}Sn multifragmentation events, the multiplicities of the hydrogen and helium isotopes emitted by the hot primary excited fragments produced at the disassembly stage of an equilibrated hot source are determined by means of correlation techniques. The relative kinetic energy distributions between the primary clusters and the light charged particles that they evaporate are also derived. From the comparison between the secondary multiplicities observed experimentally and the multiplicities predicted by the GEMINI model, it is concluded that the source breaks into primary fragments characterized by the same N/Z ratio as the combined system. Knowing the secondary light charged particle multiplicities and kinetic energies, the average charges of the hot fragments are reconstructed and their mean excitation energies are estimated. The fragment excitation energies are equal to 3.0 MeV/nucleon over the full range of intermediate mass fragment atomic number. This global constancy indicates that, on average, thermodynamical equilibrium was achieved at the disassembly stage of the source. (author) 25 refs.

  3. Experimental determination of fragment excitation energies in multifragmentation events

    International Nuclear Information System (INIS)

    Marie, N.; Natowitz, J.B.; Assenard, M.; Bacri, Ch.O.

    1998-01-01

    For 50 MeV/nucleon 129Xe + natSn multifragmentation events, the multiplicities of the hydrogen and helium isotopes emitted by the hot primary excited fragments produced at the disassembly stage of an equilibrated hot source are determined by means of correlation techniques. The relative kinetic energy distributions between the primary clusters and the light charged particles that they evaporate are also derived. From the comparison between the secondary multiplicities observed experimentally and the multiplicities predicted by the GEMINI model, it is concluded that the source breaks into primary fragments characterized by the same N/Z ratio as the combined system. Knowing the secondary light charged particle multiplicities and kinetic energies, the average charges of the hot fragments are reconstructed and their mean excitation energies are estimated. The fragment excitation energies are equal to 3.0 MeV/nucleon over the full range of intermediate mass fragment atomic number. This global constancy indicates that, on average, thermodynamical equilibrium was achieved at the disassembly stage of the source. (author)

  4. Kinetic energies of charged fragments resulting from multifragmentation and asymmetric fission of the C60 molecule in collisions with monocharged ions (2-130 keV)

    International Nuclear Information System (INIS)

    Rentenier, A; Bordenave-Montesquieu, D; Moretto-Capelle, P; Bordenave-Montesquieu, A

    2003-01-01

    Multifragmentation and asymmetric fission (AF) of the C60 molecule induced by H+, H2+, H3+ and He+ ions at medium collision energies (2-130 keV) are considered. Momenta and kinetic energies of Cn+ fragment ions (n = 1-12) are deduced from an analysis of time-of-flight spectra. In multifragmentation processes, momenta are found to be approximately constant when n > 2, a behaviour which explains that the most probable kinetic energy, as well as the width of the kinetic energy distributions, is found to be inversely proportional to the fragment size n; both momenta and kinetic energies are independent of the velocity and nature of the projectile, and hence of the energy deposit. A specific study of the AF shows that the kinetic energies of C2+, C4+ and C6+ fragments are also independent of the collision velocity and projectile species; a quantitative agreement is found with values deduced from kinetic energy release measurements by another group in electron impact experiments, and the observed decrease when the mass of the light fragment increases is also reproduced. A quantitative comparison of AF and multifragmentation for the n = 2, 4 and 6 fragment ions shows that kinetic energies in AF exceed those in multifragmentation, a result which explains the oscillations observed when momenta or kinetic energies of fragments are plotted against the n-value. The AF yield is also found to scale with the energy deposit in the collision velocity range extending below the velocity at the maximum of the electronic stopping power; except for protons, it remains negligible with respect to multifragmentation as soon as the total energy deposit exceeds about 100 eV

  5. Relevance of equilibrium in multifragmentation

    International Nuclear Information System (INIS)

    Furuta, Takuya; Ono, Akira

    2009-01-01

    The relevance of equilibrium in a multifragmentation reaction of very central 40Ca + 40Ca collisions at 35 MeV/nucleon is investigated by using simulations of antisymmetrized molecular dynamics (AMD). Two types of ensembles are compared. One is the reaction ensemble of the states at each reaction time t in collision events simulated by AMD, and the other is the equilibrium ensemble prepared by solving the AMD equation of motion for a many-nucleon system confined in a container for a long time. The comparison of the ensembles is performed for the fragment charge distribution and the excitation energies. Our calculations show that there exists an equilibrium ensemble that well reproduces the reaction ensemble at each reaction time t for the investigated period 80 ≤ t ≤ 300 fm/c. However, there are some other observables that show discrepancies between the reaction and equilibrium ensembles. These may be interpreted as dynamical effects in the reaction. The usual static equilibrium at each instant is not realized, since no equilibrium ensemble with the same volume as that of the reaction system can reproduce the fragment observables

  6. Extraction of the power law exponent for 1 GeV/nucleon Au + C projectile multifragmentation

    International Nuclear Information System (INIS)

    Gilkes, M.L.; Elliott, J.B.; Hauger, A.; Hirsch, A.S.; Hjort, E.

    1993-01-01

    Using moments of the measured charge distribution in exclusive gold multifragmentation events, we present a preliminary determination of the power law exponent τ. For a system undergoing a phase transition near the critical point, τ governs the cluster size distribution and is expected on rather general grounds to lie in the range 2 < τ < 3
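
    The record extracts τ from moments of the charge distribution; as a simpler, minimal sketch of the same idea, the Python fragment below estimates a power-law exponent from synthetic fragment yields by a direct log-log least-squares fit. The synthetic data and fit range are assumptions, and the moment method used by the authors is more robust near the critical point than this plain fit.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic cluster-size (charge) yields following Y(Z) ~ Z**(-tau) with tau = 2.2
      tau_true = 2.2
      Z = np.arange(1, 31)
      expected = Z.astype(float) ** (-tau_true)
      yields = rng.poisson(1e4 * expected / expected.sum())   # add counting statistics

      # Estimate tau from the slope of log(Y) versus log(Z)
      mask = yields > 0
      slope, intercept = np.polyfit(np.log(Z[mask]), np.log(yields[mask]), 1)
      print(f"extracted tau = {-slope:.2f} (true value {tau_true})")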

  7. Contribution to the study of molecular multi-ionisation and multifragmentation in strong laser field

    International Nuclear Information System (INIS)

    Hering, P.

    1999-12-01

    Molecular multi-ionization in a strong laser field is studied using different experimental and theoretical techniques. In the 10^13-10^16 W/cm^2 laser intensity range, the strong non-linear laser-molecule coupling allows the absorption of the energies necessary for the ejection of valence electrons. Double ionization is characterized by the production of doubly charged molecular ions and by charge separation channels such as A+ + B+. For molecular charge states greater than two, the study of the multi-ionization dynamics is based on the observables of the multifragmentation, which are the fragment charge states and initial momenta. For strong intensities in the 10^15-10^16 W/cm^2 range, the multicharged atomic ion production efficiency depends on the initial electronic density localization of the molecule. For intensities less than 5 x 10^14 W/cm^2, double ionization leads to the simultaneous emission of two electrons from the molecule. The two-missing-electrons fragmentation channels appear at the internuclear equilibrium distance, following the Franck-Condon principle. For channels with more than two missing electrons, the internuclear distance of excitation is more difficult to determine. However, the different experiments reported show that the multifragmentation dynamics is independent of the electronic emission dynamics. The theoretical approach is based on the Thomas-Fermi equations and allows a non-perturbative description of the laser-molecule coupling. The calculated fragmentation kinetic energies are smaller than the Coulomb repulsion energies calculated at the internuclear equilibrium distance because of an electronic screening effect. This model reproduces the fragmentation energy releases obtained experimentally for molecules such as N2, CO2 or N2O. (author)

  8. Kinetic energies of charged fragments resulting from multifragmentation and asymmetric fission of the C{sub 60} molecule in collisions with monocharged ions (2-130 keV)

    Energy Technology Data Exchange (ETDEWEB)

    Rentenier, A; Bordenave-Montesquieu, D; Moretto-Capelle, P; Bordenave-Montesquieu, A [Laboratoire CAR-IRSAMC, UMR 5589 CNRS - Universite Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse Cedex (France)

    2003-04-28

    Multifragmentation and asymmetric fission (AF) of the C{sub 60} molecule induced by H{sup +}, H{sub 2}{sup +}, H{sub 3}{sup +} and He{sup +} ions at medium collision energies (2-130 keV) are considered. Momenta and kinetic energies of C{sub n}{sup +} fragment ions (n = 1-12) are deduced from an analysis of time-of-flight spectra. In multifragmentation processes, momenta are found to be approximately constant when n > 2, a behaviour which explains that the most probable kinetic energy, as well as the width of the kinetic energy distributions, is found to be inversely proportional to the fragment size n; both momenta and kinetic energies are independent of the velocity and nature of the projectile, and hence of the energy deposit. A specific study of the AF shows that the kinetic energies of C{sub 2}{sup +}, C{sub 4}{sup +} and C{sub 6}{sup +} fragments are also independent of the collision velocity and projectile species; a quantitative agreement is found with values deduced from kinetic energy release measurements by another group in electron impact experiments, and the observed decrease when the mass of the light fragment increases is also reproduced. A quantitative comparison of AF and multifragmentation for the n = 2, 4 and 6 fragment ions shows that kinetic energies in AF exceed those in multifragmentation, a result which explains the oscillations observed when momenta or kinetic energies of fragments are plotted against the n-value. The AF yield is also found to scale with the energy deposit in the collision velocity range extending below the velocity at the maximum of the electronic stopping power; except for protons, it remains negligible with respect to multifragmentation as soon as the total energy deposit exceeds about 100 eV.

  9. Nuclear multifragmentation by 700–1500 MeV photons: New data of GRAAL experiment

    Energy Technology Data Exchange (ETDEWEB)

    Nedorezov, V. G., E-mail: vladimir@cpc.inr.ac.ru; Lapik, A. M. [Russian Academy of Sciences, Institute for Nuclear Research (Russian Federation); Collaboration: GRAAL Collaboration

    2015-12-15

    The cross sections of carbon nucleus photodisintegration into protons and neutrons with high multiplicity for photon energies from 700 to 1500 MeV were measured. The experiment was performed at the tagged photon beam of the GRAAL setup using the wide-aperture detector LAGRANγE. It was shown that multifragmentation up to complete disintegration into separate nucleons is initiated by elementary reactions of meson photoproduction with a subsequent intranuclear cascade.

  10. Use of a time-projection chamber in multifragmentation experiments at the BEVALAC

    International Nuclear Information System (INIS)

    Porile, N.T.

    1991-01-01

    An exclusive study of multifragmentation is described. The moments of the fragment charge distribution are used to extract the critical exponents associated with the phase transition to which the breakup is ascribed. The fragmentation of 1 GeV/nucleon La and Au is studied by reverse kinematics using a carbon target. Fragments with Z ≤ 6 will be identified with the EOS time projection chamber (TPC) while heavier fragments will be identified with a multiple sampling ionization chamber (MUSIC). The experimental setup using these detectors will be described

  11. Multifragmentation in intermediate energy 129Xe-induced heavy-ion reactions

    International Nuclear Information System (INIS)

    Tso, Kin.

    1996-05-01

    The 129Xe-induced reactions on natCu, 89Y, 165Ho, and 197Au at bombarding energies of E/A = 40 and 60 MeV have been studied theoretically and experimentally in order to establish the underlying mechanism of multifragmentation in intermediate energy heavy-ion collisions. Nuclear disks formed in central heavy-ion collisions, as simulated by means of Boltzmann-like kinetic equations, break up into several fragments due to a new kind of Rayleigh-like surface instability. A sheet of liquid, stable in the limit of non-interacting surfaces, is shown to become unstable due to surface-surface interactions. The onset of this instability is determined analytically. A thin bubble behaves like a sheet and is susceptible to the surface instability through the crispation mode. The Coulomb effects associated with the depletion of charges in the central cavity of nuclear bubbles are investigated. The onset of Coulomb instability is demonstrated for perturbations of the radial mode. Experimental intermediate-mass-fragment multiplicity distributions for the 129Xe-induced reactions are shown to be binomial at each transverse energy. From these distributions, independent of the specific target, an elementary binary decay probability p can be extracted that has a thermal dependence. Thus it is inferred that multifragmentation is reducible to a combination of nearly independent emission processes. If sequential decay is assumed, the increase of p with transverse energy implies a contraction of the emission time scale. The sensitivity of p to the lower Z threshold in the definition of intermediate-mass-fragments points to a physical origin. Poisson simulations of the particle multiplicities show that the weak auto-correlation between the fragment multiplicity and the transverse energy does not distort a Poisson distribution into a binomial distribution. The effect of device efficiency on the experimental results has also been studied
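
    The binomial reducibility argument has a compact quantitative core: if the IMF multiplicity n at fixed transverse energy follows a binomial distribution with m tries and elementary probability p, then mean = m p and variance = m p (1 - p), so p = 1 - variance/mean. A minimal Python sketch with invented numbers (not the experimental values):

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic IMF multiplicities in one transverse-energy bin:
      # m independent binary decay chances, each with elementary probability p
      m_true, p_true = 12, 0.25
      n_imf = rng.binomial(m_true, p_true, size=50_000)

      # Method-of-moments inversion: mean = m*p, variance = m*p*(1 - p)
      mean, var = n_imf.mean(), n_imf.var()
      p_hat = 1.0 - var / mean
      m_hat = mean / p_hat
      print(f"extracted p = {p_hat:.3f}, m = {m_hat:.1f} (true: p = {p_true}, m = {m_true})")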

  12. Detection of simultaneous processes leading to multifragmentation in central collisions of the system 86Kr + 197Au, at 60 MeV per nucleon

    International Nuclear Information System (INIS)

    Lopez, O.

    1993-06-01

    Multi-fragment production in intermediate-energy heavy ion collisions is the most important deexcitation process for hot nuclei produced in small impact-parameter collisions (central collisions). The study of the 60 A.MeV Kr + Au system, performed at GANIL with a 4π, low-threshold experimental setup including the fragment and light-charged-particle multidetectors DELF, XYZt, MUR and TONNEAU, associated with 6 solid state detectors, has allowed the measurement of the global characteristics of dissipative reactions leading to a large number (up to 6) of fragments with charge greater than 5. The event analysis has been performed using global variables and has shown the formation of an equilibrated composite system as far as kinematical observables (velocities) in the center-of-mass frame are concerned. The use of two models, one corresponding to multifragmentation, the other to sequential binary fission, has allowed us to conclude that the composite system undergoes a simultaneous breakup (multifragmentation). Lastly, no effect related to nuclear matter compressibility (compression/expansion stage) has been found for the major part of the fragments (charge greater than 10). (orig.)

  13. Study of the nuclear multifragmentation: recent results obtained with the INDRA detector in the intermediate energy domain

    International Nuclear Information System (INIS)

    Saint-Laurent, F.

    1994-01-01

    The new 4π multidetector INDRA, designed for the study of hot nuclear systems decaying by multifragmentation, has been available for experiments since the beginning of 1993. First results emphasize its very high detection capabilities. Preliminary results on multiplicity distributions and elemental charge distributions in the most violent collisions for the 36Ar + 58Ni and 129Xe + natSn systems are presented. (author) 5 figs., 22 refs

  14. Multifragmentation in intermediate energy 129Xe-induced heavy-ion reactions

    Energy Technology Data Exchange (ETDEWEB)

    Tso, Kin [Univ. of California, Berkeley, CA (United States)]

    1996-05-01

    The 129Xe-induced reactions on natCu, 89Y, 165Ho, and 197Au at bombarding energies of E/A = 40 & 60 MeV have been studied theoretically and experimentally in order to establish the underlying mechanism of multifragmentation in intermediate energy heavy-ion collisions. Nuclear disks formed in central heavy-ion collisions, as simulated by means of Boltzmann-like kinetic equations, break up into several fragments due to a new kind of Rayleigh-like surface instability. A sheet of liquid, stable in the limit of non-interacting surfaces, is shown to become unstable due to surface-surface interactions. The onset of this instability is determined analytically. A thin bubble behaves like a sheet and is susceptible to the surface instability through the crispation mode. The Coulomb effects associated with the depletion of charges in the central cavity of nuclear bubbles are investigated. The onset of Coulomb instability is demonstrated for perturbations of the radial mode. Experimental intermediate-mass-fragment multiplicity distributions for the 129Xe-induced reactions are shown to be binomial at each transverse energy. From these distributions, independent of the specific target, an elementary binary decay probability p can be extracted that has a thermal dependence. Thus it is inferred that multifragmentation is reducible to a combination of nearly independent emission processes. If sequential decay is assumed, the increase of p with transverse energy implies a contraction of the emission time scale. The sensitivity of p to the lower Z threshold in the definition of intermediate-mass-fragments points to a physical origin. Poisson simulations of the particle multiplicities show that the weak auto-correlation between the fragment multiplicity and the transverse energy does not distort a Poisson distribution into a binomial distribution. The effect of device efficiency on the experimental results has also been studied.

  15. Relativistic nuclear collisions from the EOS experiment at the Bevalac: collective observables and multifragmentation

    International Nuclear Information System (INIS)

    Insolia, A.

    1996-01-01

    The EOS Collaboration has completed an exclusive study of relativistic heavy ion collisions at the Bevalac using a variety of projectile, target and beam energy combinations. We report here results on directed sideward flow in Au+Au between 0.25 AGeV and 1.2 AGeV, using a standard in-plane transverse momentum analysis. We also report on projectile fragmentation of Au on C at 1 AGeV. An analysis of fluctuations and critical exponents for small systems seems to support the idea that the multifragmentation regime is associated with a liquid-gas phase transition in nuclear matter. (authors)

  16. Dynamical instabilities in hot expanding nuclear systems: a microscopic approach to the understanding of multifragmentation

    International Nuclear Information System (INIS)

    Suraud, E.

    1989-01-01

    We present a microscopic study of the quasi-fusion/explosion transition in the framework of Landau-Vlasov simulations for intermediate energy heavy-ion collisions (beam energy from 10 to 100 MeV/A). After a short presentation of the results of schematic calculations, which furnish a guideline for the microscopic investigations, we discuss the relevance of our approach for studying multifragmentation. Once the limitations of this kind of dynamical simulation have been exhibited, we perform a detailed analysis in terms of the equation of state of the system. In agreement with schematic models, we find that the composite nuclear system formed in the collision actually explodes when it stays long enough in the mechanically unstable region (spinodal region). Quantitative estimates of the explosion threshold are given for central symmetric reactions (Ca + Ca and Ar + Ti). The link of the results with transport properties and the equation of state of nuclear matter is briefly discussed

  17. Breakup conditions of projectile spectators from dynamical observables

    Energy Technology Data Exchange (ETDEWEB)

    Begemann-Blaich, M.; Lindenstruth, V.; Pochodzalla, J. [and others]

    1998-03-01

    Momenta and masses of heavy projectile fragments (Z {>=} 8), produced in collisions of {sup 197}Au with C, Al, Cu and Pb targets at E/A = 600 MeV, were determined with the ALADIN magnetic spectrometer at SIS. Using this information, an analysis of kinematic correlations between the two and three heaviest projectile fragments in their rest frame was performed. The sensitivity of these correlations to the conditions at breakup was verified within the schematic SOS model. For a quantitative investigation, the data were compared to calculations with statistical multifragmentation models and to classical three-body calculations. With classical trajectory calculations, where the charges and masses of the fragments are taken from a Monte Carlo sampling of the experimental events, the dynamical observables can be reproduced. The deduced breakup parameters, however, differ considerably from those assumed in the statistical multifragmentation models which describe the charge correlations. If, on the other hand, the analysis of kinematic and charge correlations is performed for events with two and three heavy fragments produced by statistical multifragmentation codes, a good agreement with the data is found, with the exception that the fluctuation widths of the intrinsic fragment energies are significantly underestimated. A new version of the multifragmentation code MCFRAG was therefore used to investigate the potential role of angular momentum at the breakup stage. If a mean angular momentum of 0.75 ℏ/nucleon is added to the system, the energy fluctuations can be reproduced, but at the same time the charge partitions are modified and deviate from the data. (orig.)

  18. Breakup conditions of projectile spectators from dynamical observables

    International Nuclear Information System (INIS)

    Begemann-Blaich, M.; Lindenstruth, V.; Pochodzalla, J.

    1998-03-01

    Momenta and masses of heavy projectile fragments (Z ≥ 8), produced in collisions of 197Au with C, Al, Cu and Pb targets at E/A = 600 MeV, were determined with the ALADIN magnetic spectrometer at SIS. Using this information, an analysis of kinematic correlations between the two and three heaviest projectile fragments in their rest frame was performed. The sensitivity of these correlations to the conditions at breakup was verified within the schematic SOS model. For a quantitative investigation, the data were compared to calculations with statistical multifragmentation models and to classical three-body calculations. With classical trajectory calculations, where the charges and masses of the fragments are taken from a Monte Carlo sampling of the experimental events, the dynamical observables can be reproduced. The deduced breakup parameters, however, differ considerably from those assumed in the statistical multifragmentation models which describe the charge correlations. If, on the other hand, the analysis of kinematic and charge correlations is performed for events with two and three heavy fragments produced by statistical multifragmentation codes, a good agreement with the data is found, with the exception that the fluctuation widths of the intrinsic fragment energies are significantly underestimated. A new version of the multifragmentation code MCFRAG was therefore used to investigate the potential role of angular momentum at the breakup stage. If a mean angular momentum of 0.75 ℏ/nucleon is added to the system, the energy fluctuations can be reproduced, but at the same time the charge partitions are modified and deviate from the data. (orig.)

  19. Exactly solvable models: the way towards a rigorous treatment of phase transitions in finite systems

    International Nuclear Information System (INIS)

    Bugaev, K.A.

    2007-01-01

    The exact analytical solutions of a variety of statistical models recently obtained for finite systems are thoroughly discussed. Among them are a constrained version of the statistical multifragmentation model, the Bag Model of Gases and the Hills and Dales Model of surface partition. The finite-volume analytical solutions of these models were obtained by a novel, powerful mathematical method: the Laplace-Fourier transform. The Laplace-Fourier transform allows one to study the nuclear matter equation of state, the equation of state of hadronic and quark-gluon plasma and the surface entropy of large clusters on the same footing. A complete analysis of the isobaric partition singularities of these models is done for finite systems. The developed formalism allows one to exactly define the finite volume analogs of the gaseous, liquid and mixed phases of these models from the first principles of statistical mechanics

  20. Dynamical and statistical aspects of intermediate energy heavy ion collisions

    International Nuclear Information System (INIS)

    Knoll, J.

    1987-01-01

    The lectures presented deal with three different topics relevant to the discussion of nuclear collisions at medium to high energies. The first lecture concerns a subject of general interest: the description of statistical systems and their dynamics by the concept of missing information. It presents an excellent scope to formulate statistical theories in such a way that they carefully keep track of the known (relevant) information while maximizing the ignorance about the irrelevant, unknown information. The last two lectures deal with quite topical questions of intermediate energy heavy-ion collisions. These are the multi-fragmentation dynamics of highly excited nuclear systems and the so-called subthreshold particle production. All three subjects are self-contained and can be read without knowledge of the other ones. (orig.)

  1. Comparison between cross sections, saddle point and scission point barriers for the 32S+24Mg reaction

    Directory of Open Access Journals (Sweden)

    Santos T. J.

    2014-04-01

    One of the principal characteristics of nuclear multifragmentation is the emission of complex fragments of intermediate mass. The statistical multifragmentation model has been used for many years to describe the distribution of these fragments. An extension of the statistical multifragmentation model to include partial widths and lifetimes for emission interprets the fragmentation process as the near-simultaneous limit of a series of sequential binary decays. In this extension, the expression describing intermediate mass fragment emission is almost identical to that of light particle emission. At lower temperatures, similar expressions have been shown to furnish a good description of very light intermediate mass fragment emission. However, this is usually not considered a good approximation to the emission of heavier fragments. These emissions seem to be determined by the characteristics of the system at the saddle point and its subsequent dynamical evolution rather than by the scission point. Here, we compare the barriers and decay widths of these different formulations of intermediate fragment emission and analyze the extent to which they remain distinguishable at high excitation energy.

  2. Comparison between cross sections, saddle point and scission point barriers for the {sup 32}S+{sup 24}Mg reaction

    Energy Technology Data Exchange (ETDEWEB)

    Santos, T.J.; Carlson, B.V., E-mail: nztiago@gmail.com [Instituto Tecnologia de Aeronautica (ITA), Sao Jose dos Campos, SP (Brazil)

    2014-07-01

    One of the principal characteristics of nuclear multifragmentation is the emission of complex fragments of intermediate mass. The statistical multifragmentation model has been used for many years to describe the distribution of these fragments. An extension of the statistical multifragmentation model to include partial widths and lifetimes for emission interprets the fragmentation process as the near-simultaneous limit of a series of sequential binary decays. In this extension, the expression describing intermediate mass fragment emission is almost identical to that of light particle emission. At lower temperatures, similar expressions have been shown to furnish a good description of very light intermediate mass fragment emission. However, this is usually not considered a good approximation to the emission of heavier fragments. These emissions seem to be determined by the characteristics of the system at the saddle point and its subsequent dynamical evolution rather than by the scission point. Here, we compare the barriers and decay widths of these different formulations of intermediate fragment emission and analyze the extent to which they remain distinguishable at high excitation energy. (author)

  3. Nuclear fragmentation reactions in extended media studied with Geant4 toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Pshenichnov, Igor, E-mail: pshenich@fias.uni-frankfurt.d [Frankfurt Institute for Advanced Studies, J.-W. Goethe University, 60438 Frankfurt am Main (Germany); Institute for Nuclear Research, Russian Academy of Science, 117312 Moscow (Russian Federation); Botvina, Alexander [Frankfurt Institute for Advanced Studies, J.-W. Goethe University, 60438 Frankfurt am Main (Germany); Institute for Nuclear Research, Russian Academy of Science, 117312 Moscow (Russian Federation); Mishustin, Igor [Frankfurt Institute for Advanced Studies, J.-W. Goethe University, 60438 Frankfurt am Main (Germany); Kurchatov Institute, Russian Research Center, 123182 Moscow (Russian Federation); Greiner, Walter [Frankfurt Institute for Advanced Studies, J.-W. Goethe University, 60438 Frankfurt am Main (Germany)

    2010-03-15

    It is well-known from numerous experiments that nuclear multifragmentation is a dominating mechanism for production of intermediate mass fragments in nucleus-nucleus collisions at energies above 100A MeV. In this paper we investigate the validity and performance of the Fermi break-up model and the statistical multifragmentation model implemented as parts of the Geant4 toolkit. We study the impact of violent nuclear disintegration reactions on the depth-dose profiles and yields of secondary fragments for beams of light and medium-weight nuclei propagating in extended media. Implications for ion-beam cancer therapy and shielding from cosmic radiation are discussed.

  4. Nuclear fragmentation reactions in extended media studied with Geant4 toolkit

    International Nuclear Information System (INIS)

    Pshenichnov, Igor; Botvina, Alexander; Mishustin, Igor; Greiner, Walter

    2010-01-01

    It is well-known from numerous experiments that nuclear multifragmentation is a dominating mechanism for production of intermediate mass fragments in nucleus-nucleus collisions at energies above 100A MeV. In this paper we investigate the validity and performance of the Fermi break-up model and the statistical multifragmentation model implemented as parts of the Geant4 toolkit. We study the impact of violent nuclear disintegration reactions on the depth-dose profiles and yields of secondary fragments for beams of light and medium-weight nuclei propagating in extended media. Implications for ion-beam cancer therapy and shielding from cosmic radiation are discussed.

  5. Nuclear fragmentation

    International Nuclear Information System (INIS)

    Chung, K.C.

    1989-01-01

    An introduction to nuclear fragmentation, with emphasis on percolation ideas, is presented. The main theoretical models are discussed and, as an application, the uniform expansion approximation is presented and the statistical multifragmentation model is used to calculate the fragment energy spectra. (L.C.)

  6. Liquid-fog and liquid-gas phase transitions in hot nuclei

    International Nuclear Information System (INIS)

    Karnaukhov, V.A.; Oeschler, H.; Budzanowski, A.

    2006-01-01

    Thermal multifragmentation of hot nuclei is interpreted as the nuclear liquid-fog phase transition inside the spinodal region. The exclusive data for p(8.1 GeV) + Au collisions are analyzed within the framework of the statistical multifragmentation model (SMM). It is found that the partition of the hot nuclei is specified after expansion to a volume equal to Vt = (2.6 ± 0.3)V0. The freeze-out volume is found to be twice as large, Vf = (5 ± 1)V0. The similarity between multifragmentation and ordinary fission is discussed

  7. Dynamical and statistical aspects in nucleus-nucleus collisions around the Fermi energy

    Energy Technology Data Exchange (ETDEWEB)

    Tamain, B.; Bocage, F.; Bougault, R.; Brou, R. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; Assenard, M. [Centre National de la Recherche Scientifique, 44 - Nantes (France). Lab. de Physique Subatomique et des Technologies Associees; Auger, G.; Benlliure, J. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France); Bacri, C.O.; Borderie, B. [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire; Bisquer, E. [Lyon-1 Univ., 69 - Villeurbanne (France). Inst. de Physique Nucleaire] [and others]

    1997-12-31

    Nucleus-nucleus collisions at low incident energy are mainly governed by statistical dissipative processes, fusion and deep inelastic reactions being the most important ones. Conversely, in the relativistic energy regime, dynamical effects play a dominant role and one should apply a participant-spectator picture in order to understand the data. In between, the intermediate energy region is a transition one in which it is necessary to disentangle dynamics from statistical effects. Moreover, the Fermi energy region corresponds to available energies comparable with nuclear binding energies and one may expect to observe phase transition effects. Experiments performed recently with 4{pi} devices have given quite new data and a much better insight into the mechanisms involved and the properties of hot nuclear matter. INDRA data related to reaction mechanisms and multifragmentation are presented. (author) 53 refs.

  8. Dynamical and statistical aspects in nucleus-nucleus collisions around the Fermi energy

    International Nuclear Information System (INIS)

    Tamain, B.; Bocage, F.; Bougault, R.; Brou, R.; Bacri, C.O.; Borderie, B.; Bisquer, E.

    1997-01-01

    Nucleus-nucleus collisions at low incident energy are mainly governed by statistical dissipative processes, fusion and deep inelastic reactions being the most important ones. Conversely, in the relativistic energy regime, dynamical effects play a dominant role and one should apply a participant-spectator picture in order to understand the data. In between, the intermediate energy region is a transition one in which it is necessary to disentangle dynamics from statistical effects. Moreover, the Fermi energy region corresponds to available energies comparable with nuclear binding energies and one may expect to observe phase transition effects. Experiments performed recently with 4π devices have given quite new data and a much better insight into the mechanisms involved and the properties of hot nuclear matter. INDRA data related to reaction mechanisms and multifragmentation are presented. (author)

  9. Sampling, Probability Models and Statistical Reasoning: Statistical Inference

    Indian Academy of Sciences (India)

    Sampling, Probability Models and Statistical Reasoning: Statistical Inference. Mohan Delampady and V R Padmawar. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp 49-58.

  10. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge on wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements...... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.
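
    The record gives no formulas, so purely as a generic illustration of the extreme-value side of such an analysis, the Python sketch below fits a Gumbel (type-I extreme value) distribution to block maxima of simulated shear samples. The sampling layout, the lognormal proxy for the shear samples and the return period are all assumptions, not the authors' model:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Simulated shear samples: 1000 "days" of 144 ten-minute values (lognormal proxy)
      shear = rng.lognormal(mean=-1.0, sigma=0.5, size=(1000, 144))

      # Daily maxima and a Gumbel (type-I extreme value) fit
      daily_max = shear.max(axis=1)
      loc, scale = stats.gumbel_r.fit(daily_max)

      # 1-in-50 return level under the fitted model
      return_level = stats.gumbel_r.ppf(1 - 1 / 50, loc=loc, scale=scale)
      print(f"Gumbel fit: loc = {loc:.3f}, scale = {scale:.3f}, "
            f"1-in-50 level = {return_level:.3f}")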

  11. Nuclear multifragmentation, its relation to general physics. A rich test ground of the fundamentals of statistical mechanics

    International Nuclear Information System (INIS)

    Gross, D.H.E.

    2006-01-01

    Heat can flow from cold to hot at any phase separation, even in macroscopic systems. Therefore Lynden-Bell's famous gravo-thermal catastrophe must also be reconsidered. In contrast to traditional canonical Boltzmann-Gibbs statistics, this is correctly described only by microcanonical statistics. Systems studied in chemical thermodynamics (ChTh) by using canonical statistics consist of several homogeneous macroscopic phases. Evidently, macroscopic statistics as in chemistry cannot and should not be applied to non-extensive or inhomogeneous systems like nuclei or galaxies. Nuclei are small and inhomogeneous. Multifragmented nuclei are even more inhomogeneous and the fragments even smaller. Phase transitions of first order, and especially phase separations, therefore cannot be described by a (homogeneous) canonical ensemble. Taking this seriously, fascinating perspectives open up for statistical nuclear fragmentation as a test ground for the basic principles of statistical mechanics, especially of phase transitions, without the use of the thermodynamic limit. Moreover, there is also a lot of similarity between the accessible phase space of fragmenting nuclei and inhomogeneous multistellar systems. This underlines the fundamental significance for statistical physics in general. (orig.)

  12. Exclusion statistics and integrable models

    International Nuclear Information System (INIS)

    Mashkevich, S.

    1998-01-01

    The definition of exclusion statistics that was given by Haldane admits a 'statistical interaction' between distinguishable particles (multispecies statistics). For such statistics, thermodynamic quantities can be evaluated exactly; explicit expressions are presented here for cluster coefficients. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models of the Calogero-Sutherland type. The interesting questions of generalizing this correspondence to the higher-dimensional and the multispecies cases remain essentially open; however, our results provide some hints as to searches for the models in question

  13. Statistical modelling with quantile functions

    CERN Document Server

    Gilchrist, Warren

    2000-01-01

    Galton used quantiles more than a hundred years ago in describing data. Tukey and Parzen used them in the 60s and 70s in describing populations. Since then, the authors of many papers, both theoretical and practical, have used various aspects of quantiles in their work. Until now, however, no one had put all the ideas together to form what turns out to be a general approach to statistics. Statistical Modelling with Quantile Functions does just that. It systematically examines the entire process of statistical modelling, starting with using the quantile function to define continuous distributions. The author shows that by using this approach, it becomes possible to develop complex distributional models from simple components. A modelling kit can be developed that applies to the whole model - deterministic and stochastic components - and this kit operates by adding, multiplying, and transforming distributions rather than data. Statistical Modelling with Quantile Functions adds a new dimension to the practice of statistics...
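
    The "modelling kit" idea (building new distributions by adding and transforming quantile functions, then simulating by feeding uniform variates through the result) can be shown in a few lines of Python; the particular components and coefficients below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(3)

      # Component quantile functions (inverse CDFs)
      def q_logistic(p):
          return np.log(p / (1 - p))        # symmetric core

      def q_exponential(p):
          return -np.log(1 - p)             # right-tail component

      # Build a new model by adding components: Q(p) = lam + eta*core(p) + kappa*tail(p)
      def q_model(p, lam=10.0, eta=1.5, kappa=2.0):
          return lam + eta * q_logistic(p) + kappa * q_exponential(p)

      # Simulating from the model is just Q(U) with U ~ Uniform(0, 1)
      u = rng.uniform(1e-9, 1 - 1e-9, size=100_000)
      sample = q_model(u)
      print(f"median = {np.median(sample):.2f}, "
            f"95th percentile = {np.percentile(sample, 95):.2f}")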

  14. A Statistical Programme Assignment Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Staghøj, Jonas; Svarer, Michael

    When treatment effects of active labour market programmes are heterogeneous in an observable way across the population, the allocation of the unemployed into different programmes becomes a particularly important issue. In this paper, we present a statistical model designed to improve the present...... duration of unemployment spells may result if a statistical programme assignment model is introduced. We discuss several issues regarding the implementation of such a system, especially the interplay between the statistical model and case workers....

  15. Diffeomorphic Statistical Deformation Models

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Hansen, Mads Fogtmann; Larsen, Rasmus

    2007-01-01

    In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al...... The modifications ensure that no boundary restriction has to be enforced on the parameter space to prevent folds or tears in the deformation field. For straightforward statistical analysis, principal component analysis and sparse methods, we assume that the parameters for a class of deformations lie on a linear...... with ground truth in the form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes....

  16. Statistical modeling for degradation data

    CERN Document Server

    Lio, Yuhlong; Ng, Hon; Tsai, Tzong-Ru

    2017-01-01

    This book focuses on the statistical aspects of the analysis of degradation data. In recent years, degradation data analysis has come to play an increasingly important role in different disciplines such as reliability, public health sciences, and finance. For example, information on products’ reliability can be obtained by analyzing degradation data. In addition, statistical modeling and inference techniques have been developed on the basis of different degradation measures. The book brings together experts engaged in statistical modeling and inference, presenting and discussing important recent advances in degradation data analysis and related applications. The topics covered are timely and have considerable potential to impact both statistics and reliability engineering.

  17. Exclusion statistics and integrable models

    International Nuclear Information System (INIS)

    Mashkevich, S.

    1998-01-01

    The definition of exclusion statistics, as given by Haldane, allows for a statistical interaction between distinguishable particles (multi-species statistics). The thermodynamic quantities for such statistics can be evaluated exactly. The explicit expressions for the cluster coefficients are presented. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models. The interesting questions of generalizing this correspondence to the higher-dimensional and the multi-species cases remain essentially open

  18. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to handle various types of data and complex relationships. Rich varieties of advanced and recent statistical modelling are mostly available in open source software (one of them being R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command-line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical modelling becomes readily available, accessible and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov Chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and makes it easier to compare models in order to find the most appropriate one for the data.
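
    As a minimal illustration of the kind of model such a tool exposes (the record's own interface is R/Shiny based; here the same GLM is fitted in Python's statsmodels, with simulated data standing in for a real dataset):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)

      # Simulated independent responses: Poisson counts with a log link
      x = rng.uniform(0, 2, size=500)
      y = rng.poisson(np.exp(0.5 + 1.2 * x))

      X = sm.add_constant(x)                        # design matrix with intercept
      result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      print(result.params)                          # estimates close to (0.5, 1.2)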

  19. Universality of projectile fragmentation model

    International Nuclear Information System (INIS)

    Chaudhuri, G.; Mallik, S.; Das Gupta, S.

    2012-01-01

    Projectile fragmentation reactions are presently an important area of research, as they are used for the production of radioactive ion beams. In this work, the recently developed projectile fragmentation model with a universal temperature profile is used for studying the charge distributions of different projectile fragmentation reactions with different projectile-target combinations at different incident energies. The model for projectile fragmentation consists of three stages: (i) abrasion, (ii) multifragmentation and (iii) evaporation

  20. Classical model of intermediate statistics

    International Nuclear Information System (INIS)

    Kaniadakis, G.

    1994-01-01

    In this work we present a classical kinetic model of intermediate statistics. In the case of Brownian particles we show that the Fermi-Dirac (FD) and Bose-Einstein (BE) distributions can be obtained, just as the Maxwell-Boltzmann (MB) distribution, as steady states of a classical kinetic equation that intrinsically takes into account an exclusion-inclusion principle. In our model the intermediate statistics are obtained as steady states of a system of coupled nonlinear kinetic equations, where the coupling constants are the transmutational potentials η_κκ'. We show that, besides the FD-BE intermediate statistics extensively studied from the quantum point of view, we can also study the MB-FD and MB-BE ones. Moreover, our model allows us to treat the three-state mixing FD-MB-BE intermediate statistics. For boson and fermion mixing in a D-dimensional space, we obtain a family of FD-BE intermediate statistics by varying the transmutational potential η_BF. This family contains, as a particular case when η_BF = 0, the quantum statistics recently proposed by L. Wu, Z. Wu, and J. Sun [Phys. Lett. A 170, 280 (1992)]. When we consider the two-dimensional FD-BE statistics, we derive an analytic expression for the fraction of fermions. When the temperature T→∞, the system is composed of an equal number of bosons and fermions, regardless of the value of η_BF. On the contrary, when T=0, η_BF becomes important and, according to its value, the system can be completely bosonic or fermionic, or composed of both bosons and fermions
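
    Schematically (this is the generic form of such kinetic steady states, not necessarily the exact equations of the paper): if the transition rate into a state is enhanced or suppressed by a factor $1 + \kappa n$, detailed balance gives $n/(1+\kappa n) = e^{-(\epsilon-\mu)/kT}$, whose steady state is

    $$ n(\epsilon) \;=\; \frac{1}{e^{(\epsilon-\mu)/kT} - \kappa} , $$

    so that κ = +1 reproduces Bose-Einstein, κ = -1 Fermi-Dirac, κ = 0 Maxwell-Boltzmann, and intermediate values of κ interpolate between them.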

  1. A hybrid model for the investigation of heavy ion collisions at intermediate energies

    International Nuclear Information System (INIS)

    Heide, B.M.

    1995-09-01

    The following topics were dealt with: the coupling of the Boltzmann-Uehling-Uhlenbeck (BUU) model with the Copenhagen multifragmentation model, realising a new hybrid model; application to 197Au + 197Au reactions between 100 and 250 A.MeV; calculation of the characteristics of the fragmenting system, including mass number, excitation energy, angular momenta and the two-particle correlation function

  2. Probing NWP model deficiencies by statistical postprocessing

    DEFF Research Database (Denmark)

    Rosgaard, Martin Haubjerg; Nielsen, Henrik Aalborg; Nielsen, Torben S.

    2016-01-01

    The objective of this article is twofold. On the one hand, a Model Output Statistics (MOS) framework for improved wind speed forecast accuracy is described and evaluated. On the other hand, the approach explored identifies unintuitive explanatory value from a diagnostic variable in an operational...... Based on the statistical model candidates inferred from the data, the lifted index NWP model diagnostic is consistently found among the NWP model predictors of the best performing statistical models across sites.
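
    A minimal sketch of the MOS idea: regress observations on raw NWP output plus a diagnostic predictor such as the lifted index. Everything below (the simulated forecasts, the coefficients, the noise) is invented; only the choice of predictors mirrors the record:

      import numpy as np

      rng = np.random.default_rng(5)

      # Simulated NWP output: raw wind-speed forecast and the lifted-index diagnostic
      n = 2000
      ws_nwp = rng.gamma(shape=4.0, scale=2.0, size=n)      # raw forecast (m/s)
      lifted_index = rng.normal(0.0, 3.0, size=n)           # stability diagnostic (K)

      # "Observed" wind speed: a biased forecast plus a stability-dependent term
      ws_obs = 0.9 * ws_nwp - 0.2 * lifted_index + 1.0 + rng.normal(0, 1, n)

      # MOS: ordinary least squares of observations on the NWP predictors
      X = np.column_stack([np.ones(n), ws_nwp, lifted_index])
      coef, *_ = np.linalg.lstsq(X, ws_obs, rcond=None)
      ws_mos = X @ coef

      rmse_raw = np.sqrt(np.mean((ws_obs - ws_nwp) ** 2))
      rmse_mos = np.sqrt(np.mean((ws_obs - ws_mos) ** 2))
      print(f"RMSE raw = {rmse_raw:.2f} m/s, RMSE MOS = {rmse_mos:.2f} m/s")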

  3. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2005-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...

  4. Statistical Compression for Climate Model Output

    Science.gov (United States)

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
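
    A deliberately oversimplified Python sketch of the compress/decompress cycle: store per-block summary statistics plus a residual scale, then decompress either as the conditional expectation (smooth) or as a conditional simulation (adds realistic roughness). The real method models spatial nonstationarity far more carefully; this is only the skeleton of the idea, with invented data:

      import numpy as np

      rng = np.random.default_rng(6)

      # "Original" daily temperature series (365 values) with a seasonal cycle + noise
      t = np.arange(365)
      data = 10 + 8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.5, 365)

      # --- compress: one mean per 5-day block plus a single residual sd ---
      block = 5
      means = data.reshape(-1, block).mean(axis=1)            # 73 stored values
      resid_sd = data.reshape(-1, block).std(axis=1).mean()   # 1 stored value

      # --- decompress ---
      cond_exp = np.repeat(means, block)                      # conditional expectation: smooth
      cond_sim = cond_exp + rng.normal(0, resid_sd, 365)      # simulation: adds small-scale noise

      print(f"compression ratio ~ {365 / (means.size + 1):.1f}x, "
            f"RMSE(cond. expectation) = {np.sqrt(np.mean((data - cond_exp) ** 2)):.2f}")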

  5. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precisely known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability
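
    A minimal Python sketch of fitting such a model from control standards: a straight-line bias model plus a precision estimate from the residual scatter. The levels, coefficients and noise below are invented, not the RAL's actual values:

      import numpy as np

      rng = np.random.default_rng(7)

      # Control standards: known values submitted blind, with measured responses
      known = np.repeat([1.0, 5.0, 10.0, 20.0], 25)               # true levels
      measured = 0.15 + 0.98 * known + rng.normal(0, 0.12, known.size)

      # Bias model: measured = b0 + b1 * known (systematic bias if b0 != 0 or b1 != 1)
      b1, b0 = np.polyfit(known, measured, 1)

      # Precision: standard deviation of the residuals about the bias model
      precision = np.std(measured - (b0 + b1 * known), ddof=2)
      print(f"bias model: measured = {b0:.3f} + {b1:.3f} * known; "
            f"precision sd = {precision:.3f}")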

  6. Statistical modelling for ship propulsion efficiency

    DEFF Research Database (Denmark)

    Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole

    2012-01-01

    This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...

  7. Sensometrics: Thurstonian and Statistical Models

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen

    sensR is a package for sensory discrimination testing with Thurstonian models and ordinal supports analysis of ordinal data with cumulative link (mixed) models. While sensR is closely connected to the sensometrics field, the ordinal package has developed into a generic statistical package applicable...... This thesis is concerned with the development and bridging of Thurstonian and statistical models for sensory discrimination testing as applied in the scientific discipline of sensometrics. In sensory discrimination testing sensory differences between products are detected and quantified by the use...... and sensory discrimination testing in particular in a series of papers by advancing Thurstonian models for a range of sensory discrimination protocols in addition to facilitating their application by providing software for fitting these models. The main focus is on identifying Thurstonian models...

  8. Statistical modelling for social researchers principles and practice

    CERN Document Server

    Tarling, Roger

    2008-01-01

    This book explains the principles and theory of statistical modelling in an intelligible way for the non-mathematical social scientist looking to apply statistical modelling techniques in research. The book also serves as an introduction for those wishing to develop more detailed knowledge and skills in statistical modelling. Rather than present a limited number of statistical models in great depth, the aim is to provide a comprehensive overview of the statistical models currently adopted in social research, so that the researcher can make appropriate choices and select the most suitable model for the research question to be addressed. To facilitate application, the book also offers practical guidance and instruction in fitting models using SPSS and Stata, the most popular statistical software packages available to social researchers. Instruction in using MLwiN is also given. Models covered in the book include: multiple regression, binary, multinomial and ordered logistic regression, log-l...

  9. Topology for statistical modeling of petascale data.

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, Valerio (University of Utah, Salt Lake City, UT); Mascarenhas, Ajith Arthur; Rusek, Korben (Texas A& M University, College Station, TX); Bennett, Janine Camille; Levine, Joshua (University of Utah, Salt Lake City, UT); Pebay, Philippe Pierre; Gyulassy, Attila (University of Utah, Salt Lake City, UT); Thompson, David C.; Rojas, Joseph Maurice (Texas A& M University, College Station, TX)

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  10. Bayesian models: A statistical primer for ecologists

    Science.gov (United States)

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. It presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  11. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a face shape statistical model and represents pose parameters by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in the process of building this statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  12. Multiplicity correlations of intermediate-mass fragments with pions and fast protons in 12C + 197Au

    International Nuclear Information System (INIS)

    Turzo, K.; Begemann-Blaich, M.L.; Auger, G.

    2003-12-01

    Low-energy π+ emitted in 12C + 197Au collisions at incident energies from 300 to 1800 MeV per nucleon were detected with the Si-Si(Li)-CsI(Tl) calibration telescopes of the INDRA multidetector. The inclusive angular distributions are approximately isotropic, consistent with multiple rescattering in the target spectator. The multiplicity correlations of the low-energy pions and of energetic protons (Ep ≥ 150 MeV) with intermediate-mass fragments were determined from the measured coincidence data. The deduced correlation functions 1 + R ∼ 1.3 for inclusive event samples reflect the strong correlations evident from the common impact-parameter dependence of the considered multiplicities. For narrow impact-parameter bins (based on charged-particle multiplicity), the correlation functions are close to unity and do not indicate strong additional correlations. Only for pions at high particle multiplicities (central collisions) is a weak anticorrelation observed, probably due to a limited competition between these emissions. Overall, the results are consistent with the equilibrium assumption made in statistical multifragmentation scenarios. Predictions obtained with intranuclear cascade models coupled to the statistical multifragmentation model are in good agreement with the experimental data. (orig.)

  13. Simple statistical model for branched aggregates

    DEFF Research Database (Denmark)

    Lemarchand, Claire; Hansen, Jesper Schmidt

    2015-01-01

    We propose a statistical model that can reproduce the size distribution of any branched aggregate, including amylopectin, dendrimers, molecular clusters of monoalcohols, and asphaltene nanoaggregates. It is based on the conditional probability for one molecule to form a new bond with a molecule, given that it already has bonds with others. The model is applied here to asphaltene nanoaggregates observed in molecular dynamics simulations of Cooee bitumen. The variation with temperature of the probabilities deduced from this model is discussed in terms of statistical mechanics arguments. The relevance of the statistical model in the case of asphaltene nanoaggregates is checked by comparing the predicted value of the probability for one molecule to have exactly i bonds with the same probability directly measured in the molecular dynamics simulations. The agreement is satisfactory.
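
    A toy sketch of the conditional-bond-probability mechanism, with invented probabilities rather than the values fitted to Cooee bitumen:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical conditional probabilities p[i] that a molecule which
        # already has i bonds forms one more (illustrative values only).
        p = {0: 0.6, 1: 0.3, 2: 0.1, 3: 0.0}

        def grow_aggregate():
            """Grow one branched aggregate, molecule by molecule."""
            bonds = [0]        # bond count of every molecule in the aggregate
            frontier = [0]     # molecules that may still form new bonds
            while frontier:
                i = frontier.pop()
                while rng.random() < p.get(bonds[i], 0.0):
                    bonds[i] += 1            # existing molecule gains a bond
                    bonds.append(1)          # a new molecule joins with one bond
                    frontier.append(len(bonds) - 1)
            return len(bonds)

        sizes = [grow_aggregate() for _ in range(10000)]
        print(np.bincount(sizes)[1:10] / len(sizes))   # empirical size distribution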

  14. Matrix Tricks for Linear Statistical Models

    CERN Document Server

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  15. Statistical Model Checking of Rich Models and Properties

    DEFF Research Database (Denmark)

    Poulsen, Danny Bøgsted

    Statistical model checking has proven itself a valuable supplement to model checking, and this thesis is concerned with extending this software validation technique to stochastic hybrid systems, which result in undecidability issues for the traditional model checking approaches. The thesis consists of two parts: the first part motivates why existing model checking technology should be supplemented by new techniques. It also contains a brief introduction to probability theory and concepts covered by the six papers making up the second part. The first two papers are concerned with developing online monitoring techniques ... systems. The fifth paper shows how stochastic hybrid automata are useful for modelling biological systems, and the final paper is concerned with showing how statistical model checking is efficiently distributed. In parallel with developing the theory contained in the papers, a substantial part of this work ...

  16. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  17. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying ...

  18. New findings on the onset of thermal disassembly in spallation reactions; Nouvelles approches pour l'etude de la multifragmentation thermique dans la spallation

    Energy Technology Data Exchange (ETDEWEB)

    Napolitani, P

    2004-09-15

    Thermal multifragmentation is the process of multi-body disassembly of a hot nucleus when the excitation is almost purely thermal, i.e. dynamical effects like compression (characteristic of ion-ion collisions at Fermi energy) are negligible. Suitable reactions are proton-induced collisions or ion-ion abrasion at relativistic incident energy. We therefore measured four systems at the FRS (Fragment Separator, GSI, Darmstadt) in inverse kinematics: Fe{sup 56}+p, Fe{sup 56}+Ti(nat), Xe{sup 136}+p, Xe{sup 136}+Ti(nat) at 1 A GeV. Inverse kinematics allows one to observe all particles without any threshold in energy. This is a great advantage compared to experiments in direct kinematics, because only in inverse kinematics is it possible to obtain complete velocity spectra (without a hole at low velocities) for fully identified isotopes. The complex shape of the velocity spectra allows the different deexcitation channels to be identified and clearly shows the transition from a chaos-dominated process (a Gaussian cloud in velocity space) to a direct Coulomb- (or possibly expansion-) dominated process (a spherical shell in velocity space). Different possible descriptions of the reaction process are discussed, based either on asymmetric fission or on multifragmentation. The resulting physical picture is especially interesting for the Fe{sup 56}+p and Xe{sup 136}+p systems: proton-induced collisions could result in the split of the system into two or more fragments through a fast break-up process. In this case, the configuration of the break-up partition is very asymmetric. The discussion is extended to other characteristics, such as the restoration of nuclear structure features in the isotopic production and the temperature dependence of the isotopic composition of the residues. (author)

  19. Compound nucleus decay: Comparison between saddle point and scission point barriers

    Energy Technology Data Exchange (ETDEWEB)

    Santos, T. J.; Carlson, B. V. [Depto. de Física, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP (Brazil)

    2014-11-11

    One of the principal characteristics of nuclear multifragmentation is the emission of complex fragments of intermediate mass. An extension of the statistical multifragmentation model has been developed in which the process can be interpreted as the near-simultaneous limit of a series of sequential binary decays. In this extension, intermediate mass fragment emissions are described by expressions almost identical to those of light particle emission. At lower temperatures, similar expressions have been shown to furnish a good description of very light intermediate mass fragment emission, but not of the emission of heavier fragments, which seems to be determined by the transition density at the saddle point rather than at the scission point. Here, we wish to compare these different formulations of intermediate mass fragment emission and analyze the extent to which they remain distinguishable at high excitation energy.

  20. Uncertainty the soul of modeling, probability & statistics

    CERN Document Server

    Briggs, William

    2016-01-01

    This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...

  1. Statistical Models for Social Networks

    NARCIS (Netherlands)

    Snijders, Tom A. B.; Cook, KS; Massey, DS

    2011-01-01

    Statistical models for social networks as dependent variables must represent the typical network dependencies between tie variables such as reciprocity, homophily, transitivity, etc. This review first treats models for single (cross-sectionally observed) networks and then for network dynamics. For

  2. Functional summary statistics for the Johnson-Mehl model

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad

    The Johnson-Mehl germination-growth model is a spatio-temporal point process model which, among other things, has been used for the description of neurotransmitter datasets. However, for such datasets, parametric Johnson-Mehl models fitted by maximum likelihood have not yet been evaluated by means of functional summary statistics. This paper therefore introduces four functional summary statistics adapted to the Johnson-Mehl model, two of them based on the second-order properties and the other two on the nuclei-boundary distances for the associated Johnson-Mehl tessellation. The theoretical properties of the functional summary statistics are investigated, non-parametric estimators are suggested, and their usefulness for model checking is examined in a simulation study. The functional summary statistics are also used for checking fitted parametric Johnson-Mehl models for a neurotransmitter dataset.

  3. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    Science.gov (United States)

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on the assumptions and limitations of the methods reviewed. Several methods are available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the methods available, it is important to know their limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  4. Distributions with given marginals and statistical modelling

    CERN Document Server

    Fortiana, Josep; Rodriguez-Lallena, José

    2002-01-01

    This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.

  5. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  6. Structured statistical models of inductive reasoning.

    Science.gov (United States)

    Kemp, Charles; Tenenbaum, Joshua B

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes 4 applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the 4 models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.

  7. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  8. Multi-fragmentation of C60 induced by 4He2+ impact (E<60 keV/amu) and investigated by a multi-correlation technique

    International Nuclear Information System (INIS)

    Rentenier, A.; Moretto-Capelle, P.; Bordenave-Montesquieu, D.; Bordenave-Montesquieu, A.

    2003-01-01

    In this communication, the C60 multi-fragmentation induced by 4He2+ ion impact in the 20-240 keV energy range is investigated. Using a multi-stop time-of-flight technique, it becomes possible to measure partial spectra corresponding to the simultaneous emission of 2-5 light charged fragments; small charged fragments are found to be accompanied by the emission of at least one other. The fragment size distribution depends on the collision energy and on the multiplicity of emitted charged fragments; it is more peaked at small sizes when the collision velocity or the multiplicity increases. The corresponding relative cross sections are also measured; processes with emission of 2 and 3 charged fragments are always dominant, but their relative weights decrease slowly as the collision energy increases

  9. The study of the phase structure of hadronic matter by searching for the deconfined quark-gluon phase transition using 2 TeV bar p-p collisions; and by searching for critical phenomena in an exclusive study of multifragmentation using 1 GeV/nucleon heavy ion collisions

    International Nuclear Information System (INIS)

    Scharenberg, R.; Hirsch, A.; Tincknell, M.

    1993-01-01

    This report discusses the Fermilab experiment E735, which is dedicated to the search for the quark-gluon plasma from proton-antiproton interactions; multifragmentation using the EOS-TPC; STAR R&D; silicon avalanche diodes as direct time-of-flight detectors; and soft photons at the AGS-E855

  10. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    Science.gov (United States)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect this scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior with occasional anomalously correlated rain events, present a challenge for the model.
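
    The central point, that the second-moment statistics depend on the space-time averaging scale, can be illustrated with a toy correlated series (an AR(1) stand-in, not the authors' fractional-order model):

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy "point rain rate": exponentially correlated noise on a fine grid.
        n, rho = 2**14, 0.98
        x = np.empty(n)
        x[0] = rng.standard_normal()
        for k in range(1, n):
            x[k] = rho * x[k - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

        # Second-moment statistics as a function of the averaging window.
        for window in (1, 4, 16, 64, 256):
            m = x.reshape(-1, window).mean(axis=1)
            print(f"window={window:4d}  variance={m.var():.3f}")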

  11. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition: "An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ." (Choice) "This is an important book, which will appeal to statisticians working on survival analysis problems." (Biometrics) "A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook." (Statistics in Medicine) The statistical analysis of lifetime or response time data is a key tool in engineering,

  12. Topology for Statistical Modeling of Petascale Data

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Levine, Joshua [Univ. of Utah, Salt Lake City, UT (United States); Gyulassy, Attila [Univ. of Utah, Salt Lake City, UT (United States); Bremer, P. -T. [Univ. of Utah, Salt Lake City, UT (United States)

    2013-10-31

    Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, the approach of the entire team, involving all three institutions, is based on the complementary techniques of combinatorial topology and statistical modelling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modelling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. The overall technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modelling, and (3) new integrated topological and statistical methods. Roughly speaking, the division of labor between our three groups (Sandia Labs in Livermore, Texas A&M in College Station, and U Utah in Salt Lake City) is as follows: the Sandia group focuses on statistical methods and their formulation in algebraic terms, and finds the application problems (and data sets) most relevant to this project; the Texas A&M group develops new algebraic geometry algorithms, in particular with fewnomial theory; and the Utah group develops new algorithms in computational topology via Discrete Morse Theory. However, we hasten to point out that our three groups stay in tight contact via videoconference every two weeks, so there is much synergy of ideas between the groups. The remainder of this document focuses on the contributions that had greater direct involvement from the team at the University of Utah in Salt Lake City.

  13. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  14. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-11

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a pure stochastic approach.

  15. Model-generated air quality statistics for application in vegetation response models in Alberta

    International Nuclear Information System (INIS)

    McVehil, G.E.; Nosal, M.

    1990-01-01

    To test and apply vegetation response models in Alberta, air pollution statistics representative of various parts of the Province are required. At this time, air quality monitoring data of the requisite accuracy and time resolution are not available for most parts of Alberta. Therefore, there exists a need to develop appropriate air quality statistics. The objectives of the work reported here were to determine the applicability of model-generated air quality statistics and to develop, by modelling, realistic and representative time series of hourly SO2 concentrations that could be used to generate the statistics demanded by vegetation response models.

  16. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  17. Fluctuations in the size of the largest projectile fragment produced in 1 GeV/nucleon Au + C collisions

    International Nuclear Information System (INIS)

    Warren, P.; Elliott, J.B.; Gilkes, M.L.; Hauger, A.; Hirsch, A.S.

    1993-01-01

    Large fluctuations in quantities such as density are characteristic of critical phenomena in the neighborhood of the critical point. Using the EOS apparatus at the Bevalac, we have performed an exclusive experiment in which the size of the largest projectile fragment produced in 1 GeV/nucleon Au+C collisions is studied as a function of the charged multiplicity of the event. A peak in the fluctuations is expected at the critical multiplicity. The data are compared to a percolation model and a statistical multifragmentation model
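
    The percolation comparison can be sketched numerically: in two-dimensional site percolation the fluctuations of the largest cluster peak near the critical occupation probability, the analogue of the peak expected at the critical multiplicity (an illustrative toy, not the EOS analysis):

        import numpy as np
        from scipy.ndimage import label

        rng = np.random.default_rng(3)

        def largest_cluster(p, size=64):
            """Size of the largest cluster on a size x size site-percolation lattice."""
            occupied = rng.random((size, size)) < p
            labels, nlab = label(occupied)
            return 0 if nlab == 0 else np.bincount(labels.ravel())[1:].max()

        # Fluctuations peak near p_c (about 0.593 for 2D site percolation).
        for p in (0.40, 0.50, 0.59, 0.70, 0.80):
            sizes = [largest_cluster(p) for _ in range(200)]
            print(f"p={p:.2f}  mean={np.mean(sizes):7.1f}  std={np.std(sizes):6.1f}")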

  18. Hyperparameterization of soil moisture statistical models for North America with Ensemble Learning Models (Elm)

    Science.gov (United States)

    Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.

    2017-12-01

    Hyperparameterization of statistical models, i.e. automated model scoring and selection via methods such as evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize statistical preprocessing of the forcing data and improve goodness-of-fit for statistical models (i.e. feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate the selection of soil moisture forecast statistical models for North America.
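
    Since the abstract mentions scikit-learn estimators, a generic randomized hyperparameter search can be sketched with scikit-learn's RandomizedSearchCV on synthetic stand-in data; Elm's own interface is not shown here, and the "forcing" data are invented:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import RandomizedSearchCV

        rng = np.random.default_rng(4)

        # Synthetic stand-ins for meteorological forcings X and soil moisture y.
        X = rng.standard_normal((500, 6))
        y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)

        # Randomized search over model structure and regularizing parameters.
        search = RandomizedSearchCV(
            RandomForestRegressor(random_state=0),
            param_distributions={
                "n_estimators": [50, 100, 200],
                "max_depth": [3, 5, 10, None],
                "min_samples_leaf": [1, 2, 5],
            },
            n_iter=10,
            cv=3,
            random_state=0,
        )
        search.fit(X, y)
        print(search.best_params_, search.best_score_)

    A multiobjective optimizer such as NSGA-2 generalizes this single-score selection to several objectives at once (e.g. fit quality and model complexity).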

  19. Statistical Models of Adaptive Immune populations

    Science.gov (United States)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (104-106 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  20. Tropical geometry of statistical models.

    Science.gov (United States)

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    This article presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. Here, we address the question of how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. The Newton polytope of a statistical model plays a key role. Our results are applied to the hidden Markov model and the general Markov model on a binary tree.
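
    The key observation can be stated compactly (notation assumed here, not quoted from the paper): tropicalization turns sum-product marginalization into min-plus optimization. In the tropical semiring,

        \[ a \oplus b = \min(a, b), \qquad a \odot b = a + b, \]

    and writing each model parameter as \( \theta_i = e^{-u_i} \), the marginal probability evaluated by the sum-product algorithm,

        \[ p(\sigma) = \sum_{h} \prod_{i} \theta_i(h, \sigma), \]

    tropicalizes to

        \[ u(\sigma) = \min_{h} \sum_{i} u_i(h, \sigma), \]

    i.e. the Viterbi (maximum a posteriori) computation; how the optimizing hidden configuration depends on the parameters is governed by the Newton polytope of the model's coordinate polynomials.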

  1. 12th Workshop on Stochastic Models, Statistics and Their Applications

    CERN Document Server

    Rafajłowicz, Ewaryst; Szajowski, Krzysztof

    2015-01-01

    This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.

  2. Correlations of intermediate mass fragments from Fe+Ta, Au, and Th collisions

    International Nuclear Information System (INIS)

    Sangster, T.C.; Begemann-Blaich, M.; Blaich, T.; Britt, H.C.; Hansen, L.F.; Namboodiri, M.N.; Peilert, G.

    1995-01-01

    Charge, velocity, and angular correlations between intermediate mass fragments (IMFs) are presented for 50 and 100 MeV/nucleon Fe bombardments of Ta, Au, and Th targets. Correlation functions generated as a function of the relative velocity and the opening angle between two IMFs are qualitatively independent of the projectile energy and target mass and show a suppression at small relative velocities and opening angles due to the Coulomb repulsion between the fragments. The correlations are consistent with IMFs emitted primarily from a highly excited target residue following a rapid preequilibrium cascade. The correlation data are compared to model calculations using the event generator MENEKA and the quantum molecular dynamics (QMD) code with a statistical deexcitation of residual fragments utilizing the multifragmentation code SMM. All data are consistent with either a simultaneous multifragmentation at a freeze-out density of 0.1-0.3 times normal nuclear matter density or a more sequential emission with time constant τ ≤ 500 fm/c

  3. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

    A tutorial is presented discussing the basic issues associated with propagation-of-uncertainty analysis and statistical validation of engineering and scientific models. The propagation-of-uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation-of-uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band-based, multivariate, sum-of-squares-of-residuals, and optimization methods. After completion of the tutorial, a survey of the statistical model validation literature is presented and recommendations for future work are made.
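
    A hedged sketch of the Monte Carlo propagation method on a damped spring-mass example like the tutorial's; the response formula is the standard underdamped free vibration, and the parameter distributions are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(5)

        def displacement(m, c, k, x0=0.1, t=2.0):
            """Underdamped free response at time t to initial displacement x0."""
            wn = np.sqrt(k / m)                    # natural frequency
            zeta = c / (2.0 * np.sqrt(k * m))      # damping ratio (< 1 here)
            wd = wn * np.sqrt(1.0 - zeta**2)       # damped frequency
            return x0 * np.exp(-zeta * wn * t) * (
                np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t))

        # Propagate normally distributed parameter uncertainty through the model.
        n = 5000
        samples = zip(rng.normal(1.0, 0.05, n),    # mass
                      rng.normal(0.3, 0.03, n),    # damping
                      rng.normal(40.0, 2.0, n))    # stiffness
        out = np.array([displacement(m, c, k) for m, c, k in samples])
        print(f"displacement at t=2: {out.mean():.4f} +/- {out.std():.4f}")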

  4. Multiple commodities in statistical microeconomics: Model and market

    Science.gov (United States)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  5. Statistical models for optimizing mineral exploration

    International Nuclear Information System (INIS)

    Wignall, T.K.; DeGeoffroy, J.

    1987-01-01

    The primary purpose of mineral exploration is to discover ore deposits. The emphasis of this volume is on the mathematical and computational aspects of optimizing mineral exploration. The seven chapters that make up the main body of the book are devoted to the description and application of various types of computerized geomathematical models. These chapters include: (1) the optimal selection of ore deposit types and regions of search, as well as prospecting selected areas, (2) designing airborne and ground field programs for the optimal coverage of prospecting areas, and (3) delineating and evaluating exploration targets within prospecting areas by means of statistical modeling. Many of these statistical programs are innovative and are designed to be useful for mineral exploration modeling. Examples of geomathematical models are applied to exploring for six main types of base and precious metal deposits, as well as other mineral resources (such as bauxite and uranium)

  6. Fluctuations of the single-particle density in nuclear dynamics

    International Nuclear Information System (INIS)

    Burgio, G.F.; Chomaz, P.; Randrup, J.

    1991-01-01

    In recent years semiclassical methods have been developed to study heavy-ion collisions in the framework of the Boltzmann-Uehling-Uhlenbeck theory, in which the collisionless mean field evolution has been augmented by a Pauli-blocked Nordheim collision term. Since these models describe the average dynamic trajectory, they cannot be applied to describe fluctuations of one-body observables, correlations in the emission of light particles and catastrophic processes like multifragmentation. The authors have developed a new method in order to include the stochastic part of the collision integral into BUU-type simulations of the nuclear dynamics. They apply this method to a two-dimensional gas of fermions on a torus, for which the time evolution of the mean trajectory and the associated correlation function are calculated; the variance of the phase-space occupancy follows closely the predictions of the corresponding Fokker-Planck equation and relaxes towards the appropriate quantum-statistical limit. The breaking of the translational and spherical symmetry in the model permits the study of unstable situations in phase-space. The introduction of the nonlinear one-body field allows them to explore dynamical instabilities and bifurcations. Therefore the model can be appropriate for studying nuclear multifragmentation

  7. Study of stochastic approaches to the N-body problem: application to nuclear multifragmentation; Etude des approches stochastiques du probleme a N corps: application a la multifragmentation nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Guarnera, A.

    1996-07-09

    In the last decade, nuclear physics research has found, with the observation of phenomena such as multifragmentation or vaporization, the possibility to gain a deeper insight into the nuclear matter phase diagram. For example, a spinodal decomposition scenario has been proposed to explain multifragmentation: because of the initial compression, the system may enter a region, the spinodal zone, in which the nuclear matter is no longer stable, so that any fluctuation leads to the formation of fragments. This thesis deals with spinodal decomposition within the theoretical framework of stochastic mean-field approaches, in which the one-body density function may experience a stochastic evolution. We have shown that these approaches are able to describe phenomena, such as first-order phase transitions, in which fluctuations and many-body correlations play an important role. In the framework of stochastic mean-field approaches we have shown that fragment production by spinodal decomposition is characterized by typical time scales of the order of 100 fm/c and by typical size scales around the neon mass. We have also shown that these features are robust and that they are not affected significantly by a possible expansion of the system or by the finite size of nuclei. We have proposed as a signature of spinodal decomposition some typical partitions of the largest fragments. The study and the comparison with experimental data, performed for the reactions Xe + Cu at 45 MeV/A and Xe + Sn at 50 MeV/A, have shown a remarkable agreement. Moreover, we would like to stress that the theory does not contain any adjustable parameter. These results seem to give a strong indication of the possibility of observing a spinodal decomposition of nuclei. (author).

  8. Statistical physics of pairwise probability models

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2009-11-01

    Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
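
    One widely used approximate fitting method from this literature, naive mean-field inversion, can be sketched on synthetic binary data as follows (an illustration of the technique, not the paper's code):

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic "spike" data: T time bins for n units, in +/-1 coding.
        n_units, T = 10, 50000
        s = np.where(rng.random((T, n_units)) < 0.4, 1.0, -1.0)
        s[:, 1] = np.where(rng.random(T) < 0.8, s[:, 0], -s[:, 0])  # correlated pair

        # Sufficient statistics of the pairwise model: means and covariances.
        m = s.mean(axis=0)
        C = np.cov(s, rowvar=False)

        # Naive mean-field inversion: couplings from the inverse covariance,
        # fields from the mean-field self-consistency condition.
        J = -np.linalg.inv(C)
        np.fill_diagonal(J, 0.0)
        h = np.arctanh(m) - J @ m

        print(J[0, 1])   # the engineered pair receives the strongest coupling

    Finer time bins change the means and correlations entering these formulas, which is one way to see the bin-size dependence studied in the paper.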

  9. Statistical transmutation in doped quantum dimer models.

    Science.gov (United States)

    Lamas, C A; Ralko, A; Cabra, D C; Poilblanc, D; Pujol, P

    2012-07-06

    We prove a "statistical transmutation" symmetry of doped quantum dimer models on the square, triangular, and kagome lattices: the energy spectrum is invariant under a simultaneous change of statistics (i.e., bosonic into fermionic or vice versa) of the holes and of the signs of all the dimer resonance loops. This exact transformation enables us to define the duality equivalence between doped quantum dimer Hamiltonians and provides the analytic framework to analyze dynamical statistical transmutations. We investigate numerically the doping of the triangular quantum dimer model with special focus on the topological Z2 dimer liquid. Doping leads to four (instead of two for the square lattice) inequivalent families of Hamiltonians. Competition between phase separation, superfluidity, supersolidity, and fermionic phases is investigated in the four families.

  10. SDI: Statistical dynamic interactions

    International Nuclear Information System (INIS)

    Blann, M.; Mustafa, M.G.; Peilert, G.; Stoecker, H.; Greiner, W.

    1991-01-01

    We focus on the combined statistical and dynamical aspects of heavy-ion induced reactions. The overall picture is illustrated by considering the reaction 36Ar + 238U at a projectile energy of 35 MeV/nucleon. We illustrate the time-dependent bound excitation energy due to the fusion/relaxation dynamics as calculated with the Boltzmann master equation. An estimate of the mass, charge, and excitation of an equilibrated nucleus surviving the fast (dynamic) fusion-relaxation process is used as input into an evaporation calculation which includes 20 heavy-fragment exit channels. The distribution of excitations between residue and clusters is explicitly calculated, as is the further deexcitation of clusters to bound nuclei. These results are compared with the exclusive cluster multiplicity measurements of Kim et al., and are found to give excellent agreement. We also consider an equilibrated residue system at 25% lower initial excitation, which gives an unsatisfactory exclusive multiplicity distribution. This illustrates that exclusive fragment multiplicity may provide a thermometer for system excitation. This analysis of data involves successive binary decay with no compressional effects or phase transitions. Several examples of primary versus final (stable) cluster decay probabilities for an A = 100 nucleus at excitations of 100 to 800 MeV are presented. From these results a large change in multifragmentation patterns may be understood as a simple phase-space consequence, invoking neither phase transitions nor equation-of-state information. These results are used to illustrate physical quantities that are ambiguous to deduce from experimental fragment measurements. 14 refs., 4 figs

  11. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

    This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.

  12. Model for neural signaling leap statistics

    International Nuclear Information System (INIS)

    Chevrollier, Martine; Oria, Marcos

    2011-01-01

    We present a simple model for neural signaling leaps in the brain, considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (with T = 37.5°C, the awake regime) and Lévy statistics (T = 35.5°C, the sleeping period), characterized by rare events of long-range connections.
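
    The qualitative difference between the two regimes can be illustrated by comparing tail probabilities of Gaussian and power-law (Lévy-like) leap lengths; the parameters below are invented:

        import numpy as np

        rng = np.random.default_rng(7)

        n = 100000
        normal_leaps = np.abs(rng.normal(0.0, 1.0, n))
        levy_leaps = rng.pareto(1.5, n) + 1.0   # power-law tail, index < 2

        # Rare long-range connections dominate only the heavy-tailed regime.
        for name, leaps in (("normal", normal_leaps), ("levy", levy_leaps)):
            print(f"{name:7s}  P(leap > 10) = {(leaps > 10).mean():.5f}")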

  13. Model for neural signaling leap statistics

    Science.gov (United States)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain, considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (with T = 37.5°C, the awake regime) and Lévy statistics (T = 35.5°C, the sleeping period), characterized by rare events of long-range connections.

  14. WE-A-201-02: Modern Statistical Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Niemierko, A.

    2016-06-15

    Chris Marshall: Memorial Introduction Donald Edmonds Herbert Jr., or Don to his colleagues and friends, exemplified the “big tent” vision of medical physics, specializing in Applied Statistics and Dynamical Systems theory. He saw, more clearly than most, that “Making models is the difference between doing science and just fooling around [ref Woodworth, 2004]”. Don developed an interest in chemistry at school by “reading a book” - a recurring theme in his story. He was awarded a Westinghouse Science scholarship and attended the Carnegie Institute of Technology (later Carnegie Mellon University) where his interest turned to physics and led to a BS in Physics after transfer to Northwestern University. After (voluntary) service in the Navy he earned his MS in Physics from the University of Oklahoma, which led him to Johns Hopkins University in Baltimore to pursue a PhD. The early death of his wife led him to take a salaried position in the Physics Department of Colorado College in Colorado Springs so as to better care for their young daughter. There, a chance invitation from Dr. Juan del Regato to teach physics to residents at the Penrose Cancer Hospital introduced him to Medical Physics, and he decided to enter the field. He received his PhD from the University of London (UK) under Prof. Joseph Rotblat, where I first met him, and where he taught himself statistics. He returned to Penrose as a clinical medical physicist, also largely self-taught. In 1975 he formalized an evolving interest in statistical analysis as Professor of Radiology and Head of the Division of Physics and Statistics at the College of Medicine of the University of South Alabama in Mobile, AL where he remained for the rest of his career. He also served as the first Director of their Bio-Statistics and Epidemiology Core Unit working in part on a sickle-cell disease. After retirement he remained active as Professor Emeritus. Don served for several years as a consultant to the Nuclear

  15. WE-A-201-02: Modern Statistical Modeling

    International Nuclear Information System (INIS)

    Niemierko, A.

    2016-01-01

    Chris Marshall: Memorial Introduction Donald Edmonds Herbert Jr., or Don to his colleagues and friends, exemplified the “big tent” vision of medical physics, specializing in Applied Statistics and Dynamical Systems theory. He saw, more clearly than most, that “Making models is the difference between doing science and just fooling around [ref Woodworth, 2004]”. Don developed an interest in chemistry at school by “reading a book” - a recurring theme in his story. He was awarded a Westinghouse Science scholarship and attended the Carnegie Institute of Technology (later Carnegie Mellon University) where his interest turned to physics and led to a BS in Physics after transfer to Northwestern University. After (voluntary) service in the Navy he earned his MS in Physics from the University of Oklahoma, which led him to Johns Hopkins University in Baltimore to pursue a PhD. The early death of his wife led him to take a salaried position in the Physics Department of Colorado College in Colorado Springs so as to better care for their young daughter. There, a chance invitation from Dr. Juan del Regato to teach physics to residents at the Penrose Cancer Hospital introduced him to Medical Physics, and he decided to enter the field. He received his PhD from the University of London (UK) under Prof. Joseph Rotblat, where I first met him, and where he taught himself statistics. He returned to Penrose as a clinical medical physicist, also largely self-taught. In 1975 he formalized an evolving interest in statistical analysis as Professor of Radiology and Head of the Division of Physics and Statistics at the College of Medicine of the University of South Alabama in Mobile, AL where he remained for the rest of his career. He also served as the first Director of their Bio-Statistics and Epidemiology Core Unit working in part on a sickle-cell disease. After retirement he remained active as Professor Emeritus. Don served for several years as a consultant to the Nuclear

  16. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  17. Equilibrium statistical mechanics of lattice models

    CERN Document Server

    Lavis, David A

    2015-01-01

    Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm-Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg-Landau theory is introduced. The extension of mean-field theory to higher orders is explored using the Kikuchi-Hijmans-De Boer hierarchy of approximations. In Part III the use of alge...

  18. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    Science.gov (United States)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this creates an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best suited to addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations of each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  19. Spherical Process Models for Global Spatial Statistics

    KAUST Repository

    Jeong, Jaehong; Jun, Mikyoung; Genton, Marc G.

    2017-01-01

    Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture

  20. Model for neural signaling leap statistics

    Energy Technology Data Exchange (ETDEWEB)

    Chevrollier, Martine; Oria, Marcos, E-mail: oria@otica.ufpb.br [Laboratorio de Fisica Atomica e Lasers, Departamento de Fisica, Universidade Federal da Paraíba, Caixa Postal 5086, 58051-900 João Pessoa, Paraíba (Brazil)]

    2011-03-01

    We present a simple model for neural signaling leaps in the brain, considering only the thermodynamic (Nernst) potential in neuron cells and the brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (T = 37.5 °C, the awake regime) and Lévy statistics (T = 35.5 °C, the sleeping period), the latter characterized by rare events of long-range connections.
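
    For reference, the thermodynamic (Nernst) potential on which this model is built has the standard textbook form below; the specific ion species and concentrations are not given in the record, so the formula is stated generically.

```latex
E = \frac{RT}{zF}\,\ln\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}
```

    Here R is the gas constant, T the absolute temperature, z the ion valence and F the Faraday constant; the two regimes in the abstract differ only through T (310.65 K for 37.5 °C versus 308.65 K for 35.5 °C).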

  1. Analysis and Evaluation of Statistical Models for Integrated Circuits Design

    Directory of Open Access Journals (Sweden)

    Sáenz-Noval J.J.

    2011-10-01

    Statistical models for integrated circuits (ICs) allow the percentage of acceptable devices in a batch to be estimated before fabrication. Currently, Pelgrom's is the statistical model most widely accepted in industry; however, it was derived from a micrometer technology, which does not guarantee its reliability for nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each of the design stages and purposes.

  2. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.

  3. Multi-fragmentation of C₆₀ induced by ⁴He²⁺ impact (E < 60 keV/amu) and investigated by a multi-correlation technique

    Energy Technology Data Exchange (ETDEWEB)

    Rentenier, A.; Moretto-Capelle, P., E-mail: pmc@irsamc.ups-tlse.fr; Bordenave-Montesquieu, D.; Bordenave-Montesquieu, A.

    2003-05-01

    In this communication, the C₆₀ multi-fragmentation induced by ⁴He²⁺ ion impact in the 20-240 keV energy range is investigated. Using a multi-stop time-of-flight technique, it becomes possible to measure partial spectra corresponding to the simultaneous emission of 2-5 light charged fragments; small charged fragments are found to be accompanied by the emission of at least one other. The fragment size distribution depends on the collision energy and the multiplicity of emitted charged fragments, and is more peaked at small sizes when the collision velocity or the multiplicity increases. The corresponding relative cross sections are also measured; processes with emission of 2 and 3 charged fragments are always dominant, but their relative weights decrease slowly as the collision energy increases.

  4. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-07-01

    The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model the time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle and volcanic eruptions, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcast to a certain extent, with the MLP performing significantly better than the remaining models. However, variability remains that cannot be statistically hindcast within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict winter 2011/12, with warm and weak vortex conditions. A vortex breakdown is predicted for late January or early February 2012.

  5. Improved model for statistical alignment

    Energy Technology Data Exchange (ETDEWEB)

    Miklos, I.; Toroczkai, Z. (Zoltan)

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.

  6. Daily precipitation statistics in regional climate models

    DEFF Research Database (Denmark)

    Frei, Christoph; Christensen, Jens Hesselbjerg; Déqué, Michel

    2003-01-01

    An evaluation is undertaken of the statistics of daily precipitation as simulated by five regional climate models using comprehensive observations in the region of the European Alps. Four limited area models and one variable-resolution global model are considered, all with a grid spacing of 50 km...

  7. Infinite Random Graphs as Statistical Mechanical Models

    DEFF Research Database (Denmark)

    Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

    2011-01-01

    We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...

  8. Angular and velocity analysis of the three-fold events in the Xe+Cu reaction at 45 MeV/u

    International Nuclear Information System (INIS)

    Bruno, M.; D'Agostino, M.; Fiandri, M.L.; Fuschini, E.; Manduci, L.; Mastinu, P.F.; Milazzo, P.M.; Gramegna, F.; Ferrero, A.M.J.; Gulminelli, F.; Iori, I.; Moroni, A.; Scardaoni, R.; Buttazzo, P.; Margagliotti, G.V.; Vannini, G.; Auger, G.; Plagnol, E.

    1994-01-01

    An analysis of the angular and velocity distributions of the intermediate mass fragments produced in the reaction Xe+Cu at 45 MeV/u is presented. Events coming from central collisions are selected and compared with predictions of different models based on a statistical deexcitation of an equilibrated source. The angular and velocity correlations show that the experimental production of three nearly-equal mass fragments cannot be explained by a sequential binary decay and is compatible with a multifragmentation mechanism. ((orig.))

  9. An R2 statistic for fixed effects in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
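
    As a minimal illustration of the one-to-one F-to-R² relationship the abstract describes, the generic mapping from an F statistic with (ν₁, ν₂) degrees of freedom to an R² is sketched below; the precise mixed-model definition should be taken from Edwards et al. themselves.

```python
def r2_from_f(f_stat, df_num, df_den):
    """Map an F statistic with (df_num, df_den) degrees of freedom to R^2.

    Generic one-to-one F <-> R^2 relationship; the exact degrees of
    freedom for the mixed-model case are defined in the paper.
    """
    return (df_num * f_stat) / (df_num * f_stat + df_den)

# Hypothetical example: F = 12.4 on (3, 120) degrees of freedom
print(r2_from_f(12.4, 3, 120))  # ~0.237
```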

  10. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko

    2011-03-17

    We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem, as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
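
    For orientation, the advection (transport) equation that the deterministic component builds on has the standard form below; splitting the source term into local production and a large-scale trend mirrors the abstract's decomposition and is written here for illustration only.

```latex
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
  = S_{\text{local}}(\mathbf{x},t) + S_{\text{trend}}(\mathbf{x})
```

    where c is the ground-level ozone concentration and u the wind (advection) field.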

  11. Adaptive Maneuvering Frequency Method of Current Statistical Model

    Institute of Scientific and Technical Information of China (English)

    Wei Sun; Yongjian Yang

    2017-01-01

    The current statistical model (CSM) has good performance in maneuvering target tracking. However, a fixed maneuvering frequency deteriorates the tracking results, causing serious dynamic delay, slow convergence and limited precision when the Kalman filter (KF) algorithm is used. In this study, a new current statistical model and a new Kalman filter are proposed to improve the performance of maneuvering target tracking. The new model, which employs an innovation-dominated subjection function to adaptively adjust the maneuvering frequency, performs better in step-maneuvering target tracking, although a fluctuation phenomenon appears. To address this problem, a new adaptive fading Kalman filter is proposed as well. In the new Kalman filter, the prediction values are amended in time by setting judgment and amendment rules, so that the tracking precision and the fluctuation phenomenon of the new current statistical model are improved. Simulation results indicate the effectiveness of the new algorithm and its practical guiding significance.
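
    For readers who want the baseline the paper modifies, a generic predict/update step of a plain linear Kalman filter is sketched below; the adaptive fading and maneuvering-frequency logic of the paper is not reproduced.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a plain linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```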

  12. Speech emotion recognition based on statistical pitch model

    Institute of Scientific and Technical Information of China (English)

    WANG Zhiping; ZHAO Li; ZOU Cairong

    2006-01-01

    A modified Parzen-window method, which keeps high resolution at low frequencies and smoothness at high frequencies, is proposed to obtain the statistical model. A gender classification method utilizing this statistical model is then proposed, achieving 98% accuracy in gender classification when long sentences are processed. After separating male and female voices, the means and standard deviations of speech training samples with different emotions are used to create the corresponding emotion models. The Bhattacharyya distances between the test sample and the statistical pitch models are then utilized for emotion recognition in speech. Normalization of pitch for male and female voices is also considered, in order to map them into a uniform space. Finally, a speech emotion recognition experiment based on K Nearest Neighbor shows that a correct rate of 81% is achieved, compared with only 73.85% when the traditional parameters are utilized.
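
    The Bhattacharyya distance between two univariate Gaussian pitch models has a standard closed form, sketched below; this is the generic formula with hypothetical pitch statistics, not necessarily the exact variant used in the paper.

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

# Hypothetical use: pick the emotion model closest to the test sample's pitch
models = {"anger": (280.0, 900.0), "sadness": (190.0, 400.0)}  # (mean, var)
test = (265.0, 850.0)
emotion = min(models, key=lambda e: bhattacharyya_gauss(*test, *models[e]))
```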

  13. Statistical modelling of citation exchange between statistics journals.

    Science.gov (United States)

    Varin, Cristiano; Cattelan, Manuela; Firth, David

    2016-01-01

    Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross-citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation 'export scores' within the discipline of statistics.

  14. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
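
    A generic sketch of this validation pattern (an L1-penalized logistic NTCP-style model, cross-validated performance, and a label-permutation test) is given below on synthetic data; a full double (nested) cross-validation would also tune the penalty strength in an inner loop, omitted here for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))   # hypothetical dose/clinical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=150) > 0).astype(int)

# L1-penalized ("LASSO-like") logistic model for a binary complication
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Cross-validated AUC of the modeling procedure
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

# Permutation test: refit on label-shuffled data to build a null distribution
null_aucs = [cross_val_score(model, X, rng.permutation(y), cv=5,
                             scoring="roc_auc").mean() for _ in range(100)]
p_value = np.mean([a >= auc for a in null_aucs])
print(f"AUC = {auc:.3f}, permutation p = {p_value:.3f}")
```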

  15. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)]

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  16. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to work with the shell model in large spaces. Three different approaches are now in use for this, two of which are: (i) the conventional shell model diagonalization approach, taking into account recent advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large-space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail, substantiated by large-scale shell model calculations. (author)

  17. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  18. Computationally efficient statistical differential equation modeling using homogenization

    Science.gov (United States)

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  19. Models for probability and statistical inference theory and applications

    CERN Document Server

    Stapleton, James H

    2007-01-01

    This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage on probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...

  20. Fluctuations and correlations in statistical models of hadron production

    International Nuclear Information System (INIS)

    Gorenstein, M. I.

    2012-01-01

    An extension of the standard concept of the statistical ensembles is suggested. Namely, the statistical ensembles with extensive quantities fluctuating according to an externally given distribution are introduced. Applications in the statistical models of multiple hadron production in high energy physics are discussed.

  1. Growth curve models and statistical diagnostics

    CERN Document Server

    Pan, Jian-Xin

    2002-01-01

    Growth-curve models are generalized multivariate analysis-of-variance models. These models are especially useful for investigating growth problems over short time spans in economics, biology, medical research, and epidemiology. This book systematically introduces the theory of growth curve models (GCMs), with particular emphasis on their multivariate statistical diagnostics, which are based mainly on recent developments made by the authors and their collaborators. The authors provide complete proofs of theorems as well as practical data sets and MATLAB code.

  2. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering.  Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  3. How to practise Bayesian statistics outside the Bayesian church: What philosophy for Bayesian statistical modelling?

    NARCIS (Netherlands)

    Borsboom, D.; Haig, B.D.

    2013-01-01

    Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science

  4. Cellular automata and statistical mechanical models

    International Nuclear Information System (INIS)

    Rujan, P.

    1987-01-01

    The authors elaborate on the analogy between the transfer matrix of usual lattice models and the master equation describing the time development of cellular automata. Transient and stationary properties of probabilistic automata are linked to surface and bulk properties, respectively, of restricted statistical mechanical systems. It is demonstrated that methods of statistical physics can be successfully used to describe the dynamic and the stationary behavior of such automata. Some exact results are derived, including duality transformations, exact mappings, disorder, and linear solutions. Many examples are worked out in detail to demonstrate how to use statistical physics in order to construct cellular automata with desired properties. This approach is considered to be a first step toward the design of fully parallel, probabilistic systems whose computational abilities rely on the cooperative behavior of their components

  5. Growth Curve Models and Applications : Indian Statistical Institute

    CERN Document Server

    2017-01-01

    Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas.   There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally r...

  6. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter

    1999-01-01

    This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects, more stable over time, than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  7. Solar radiation data - statistical analysis and simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Mustacchi, C; Cena, V; Rocchi, M; Haghigat, F

    1984-01-01

    The activities consisted of collecting meteorological data on magnetic tape for ten European locations (with latitudes ranging from 42° to 56° N), analysing the multi-year sequences, developing mathematical models to generate synthetic sequences having the same statistical properties as the original data sets, and producing one or more Short Reference Years (SRYs) for each location. The meteorological parameters examined were (for all the locations) global and diffuse radiation on a horizontal surface, dry bulb temperature, and sunshine duration. For some of the locations additional parameters were available, namely global, beam and diffuse radiation on surfaces other than horizontal, wet bulb temperature, wind velocity, cloud type, and cloud cover. The statistical properties investigated were the mean, variance, autocorrelation, cross-correlation with selected parameters, and probability density function. For all the meteorological parameters, various mathematical models were built: linear regression, and stochastic models of the AR and DAR type. In each case, the model with the best statistical behaviour was selected for the production of an SRY for the relevant parameter/location.

  8. Study of fragmentation reactions of light nucleus

    International Nuclear Information System (INIS)

    Toneli, David Arruda; Carlson, Brett Vern

    2011-01-01

    The decay of the compound nucleus is traditionally calculated using a sequential emission model, such as the Weisskopf-Ewing or Hauser-Feshbach ones, in which the compound nucleus decays through a series of residual nuclei by emitting one particle at a time until there is no longer sufficient energy for further emission. In light compound nuclei, however, the excitation energy necessary to fully disintegrate the system is relatively easy to attain. In such cases, decay by simultaneous emission of two or more particles becomes important. A model which takes all these decays into account is the Fermi fragmentation model. Recently, the equivalence between the Fermi fragmentation model and the statistical multifragmentation model, used to describe the decay of highly excited fragments in heavy-ion reactions, was demonstrated. Owing to the simplicity of the thermodynamic treatment used in the multifragmentation model, we have adapted it to the calculation of the Fermi breakup of light nuclei. The ultimate goal of this study is to calculate the distribution of isotopes produced in proton-induced reactions on light nuclei of biological interest, such as C, O and Ca. Although most of these residual nuclei possess extremely short half-lives and thus represent little long-term danger, they tend to be deficient in neutrons and to decay by positron emission, which allows the monitoring of proton radiotherapy by PET (Positron Emission Tomography). (author)

  9. Statistical Model Checking for Biological Systems

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel

    2014-01-01

    Statistical Model Checking (SMC) is a highly scalable simulation-based verification approach for testing and estimating the probability that a stochastic system satisfies a given linear temporal property. The technique has been applied to (discrete and continuous time) Markov chains, stochastic...

  10. Right-sizing statistical models for longitudinal data.

    Science.gov (United States)

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. © 2015 APA, all rights reserved.

  11. Calculation for fission decay from heavy ion reactions at intermediate energies

    International Nuclear Information System (INIS)

    Blaich, T.; Begemann-Blaich, M.; Fowler, M.M.; Wilhelmy, J.B.; Britt, H.C.; Fields, D.J.; Hansen, L.F.; Namboodiri, M.N.; Sangster, T.C.; Fraenkel, Z.

    1992-01-01

    A detailed deexcitation calculation is presented for target residues resulting from intermediate-energy heavy ion reactions. The model involves an intranuclear cascade, subsequent fast nucleon emission, and final decay by statistical evaporation including fission. Results are compared to data from bombardments with Fe and Nb projectiles on targets of Ta, Au, and Th at 100 MeV/nucleon. The majority of the observable features are reproduced with this simple approach, without obvious need for invoking new physical phenomena associated with multifragmentation or other collective dissipation mechanisms.

  12. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distributions rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)

  13. A statistical model for mapping morphological shape

    Directory of Open Access Journals (Sweden)

    Li Jiahan

    2010-07-01

    Background: Living things come in all shapes and sizes, from bacteria, plants, and animals to humans. Knowledge about the genetic mechanisms for biological shape has far-reaching implications for a spectrum of scientific disciplines including anthropology, agriculture, developmental biology, evolution and biomedicine. Results: We derived a statistical model for mapping specific genes or quantitative trait loci (QTLs) that control morphological shape. The model was formulated within the mixture framework, in which different types of shape are thought to result from genotypic discrepancies at a QTL. The EM algorithm was implemented to estimate QTL genotype-specific shapes based on a shape correspondence analysis. Computer simulation was used to investigate the statistical properties of the model. Conclusion: By identifying specific QTLs for morphological shape, the model developed will help to pose and address many major integrative biological and genetic questions and challenges in the genetic control of biological shape and function.
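
    Since the statistical engine here is an EM-fitted mixture over QTL genotypes, a generic two-component Gaussian-mixture EM sketch may help fix ideas; it is not the authors' shape-correspondence model, and x stands in for a hypothetical one-dimensional shape score.

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Minimal EM for a two-component univariate Gaussian mixture."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.stack([w[k] / np.sqrt(2 * np.pi * var[k])
                         * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                         for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=1)
        w, mu = nk / len(x), (r * x).sum(axis=1) / nk
        var = (r * (x - mu[:, None]) ** 2).sum(axis=1) / nk
    return w, mu, var
```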

  14. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases, and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem); using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem); testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem); and using a viable approach that resolves the computational problem of immense numbers of possible models.

  15. Statistical shape and appearance models of bones.

    Science.gov (United States)

    Sarkalkan, Nazli; Weinans, Harrie; Zadpoor, Amir A

    2014-03-01

    When applied to bones, statistical shape models (SSM) and statistical appearance models (SAM) respectively describe the mean shape and mean density distribution of bones within a certain population as well as the main modes of variations of shape and density distribution from their mean values. The availability of this quantitative information regarding the detailed anatomy of bones provides new opportunities for diagnosis, evaluation, and treatment of skeletal diseases. The potential of SSM and SAM has been recently recognized within the bone research community. For example, these models have been applied for studying the effects of bone shape on the etiology of osteoarthritis, improving the accuracy of clinical osteoporotic fracture prediction techniques, design of orthopedic implants, and surgery planning. This paper reviews the main concepts, methods, and applications of SSM and SAM as applied to bone. Copyright © 2013 Elsevier Inc. All rights reserved.
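
    As a rough illustration of what an SSM encodes (a mean shape plus principal modes of variation), a minimal PCA-based sketch is given below; it assumes the landmark correspondence and alignment steps are already done, and the function names and array layout are hypothetical.

```python
import numpy as np

def build_ssm(shapes):
    """Linear statistical shape model from corresponded, aligned landmarks.

    shapes: array (n_subjects, n_landmarks * dim), one flattened shape per row.
    Returns the mean shape, the PCA modes (rows) and the mode variances.
    """
    mean = shapes.mean(axis=0)
    _, sing, modes = np.linalg.svd(shapes - mean, full_matrices=False)
    variances = sing ** 2 / (len(shapes) - 1)
    return mean, modes, variances

def synthesize(mean, modes, variances, b):
    """New shape = mean + sum_k b_k * sqrt(lambda_k) * mode_k."""
    m = len(b)
    return mean + (np.asarray(b) * np.sqrt(variances[:m])) @ modes[:m]
```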

  16. Critical exponents in nucleus breakup

    International Nuclear Information System (INIS)

    Campi, X.

    1987-01-01

    In recent years the study of cluster formation has become a new field in statistical physics. Nuclear reactions with particle number change can be viewed as cluster formation processes. Multifragmentation decay produces a power-law distribution of medium-size clusters. These cluster size distributions resemble those of many other statistical cluster formation processes. We now discuss these analogies in some detail.

  17. Statistical models and NMR analysis of polymer microstructure

    Science.gov (United States)

    Statistical models can be used in conjunction with NMR spectroscopy to study polymer microstructure and polymerization mechanisms. Thus, Bernoullian, Markovian, and enantiomorphic-site models are well known. Many additional models have been formulated over the years for additional situations. Typica...

  18. Decay of hot nuclei produced by relativistic light ions

    International Nuclear Information System (INIS)

    Karnaukhov, V.A.; Avdeev, S.P.; Kuznetsov, V.D.

    1995-01-01

    In collisions of light relativistic projectiles (p, ⁴He) with heavy nuclei (Au), very excited target spectators are created, which decay via multiple emission of intermediate mass fragments (IMFs). It was found that the mean IMF multiplicities are equal (within 15%) to 2.0, 2.6 and 3.0 at proton energies of 2.16, 3.6 and 8.1 GeV, respectively. These values are comparable with those obtained with heavy ions in the same beam energy range. This is taken to indicate that this observable is not sensitive to the collision dynamics and is determined by the phase space factor. IMF energy spectra are described by the statistical model of multifragmentation, neglecting the dynamics of the expansion stage before break-up. The expansion velocity is estimated to be ≤ 0.02 c. The mean lifetime of the fragmenting system is found to be ≤ 75 fm/c from IMF-IMF angular correlations for ⁴He (14.6 GeV) + Au collisions. The results support a scenario of true 'thermal' multifragmentation. 26 refs., 10 figs., 1 tab

  19. Workshop on Model Uncertainty and its Statistical Implications

    CERN Document Server

    1988-01-01

    In this book problems related to the choice of models in such diverse fields as regression, covariance structure, time series analysis and multinomial experiments are discussed. The emphasis is on the statistical implications for model assessment when the assessment is done with the same data that generated the model. This is a problem of long standing, notorious for its difficulty. Some contributors discuss this problem in an illuminating way. Others, and this is a truly novel feature, investigate systematically whether sample re-use methods like the bootstrap can be used to assess the quality of estimators or predictors in a reliable way given the initial model uncertainty. The book should prove to be valuable for advanced practitioners and statistical methodologists alike.

  20. Kolmogorov complexity, pseudorandom generators and statistical models testing

    Czech Academy of Sciences Publication Activity Database

    Šindelář, Jan; Boček, Pavel

    2002-01-01

    Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

  1. Visualization of the variability of 3D statistical shape models by animation.

    Science.gov (United States)

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects, together with knowledge about their statistical variability, are of great benefit in many computer-assisted medical applications, such as image analysis and therapy or surgery planning. Statistical models of shape have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate the variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.

  2. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  3. The use of statistical models in heavy-ion reactions studies

    International Nuclear Information System (INIS)

    Stokstad, R.G.

    1984-01-01

    This chapter reviews the use of statistical models to describe nuclear level densities and the decay of equilibrated nuclei. The statistical models of nuclear structure and nuclear reactions presented here have wide application in the analysis of heavy-ion reaction data. Applications are illustrated with examples of gamma-ray decay, the emission of light particles and heavier clusters of nucleons, and fission. In addition to the compound nucleus, the treatment of equilibrated fragments formed in binary reactions is discussed. The statistical model is shown to be an important tool for the identification of products from nonequilibrium decay

  4. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  5. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  6. Active Learning with Statistical Models.

    Science.gov (United States)

    1995-01-01

    Active Learning with Statistical Models. A.I. Memo No. 1522, C.B.C.L. Paper No. 110, January 9, 1995. Authors: David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan; MIT Department of Brain and Cognitive Sciences, Center for Biological and Computational Learning. Supported under ASC-9217041 and NSF CDA-9309300. Keywords: AI, MIT, artificial intelligence, active learning, queries, locally weighted regression, LOESS, mixtures of Gaussians.

  7. Radial expansion and multifragmentation

    International Nuclear Information System (INIS)

    Angelique, J.C.; Bizard, G.; Bougault, R.; Brou, R.; Buta, A.; Colin, J.; Cussol, D.; Durand, D.; Kerambrun, A.; Le Brun, C.; Lecolley, J.F.; Lopez, O.; Louvel, M.; Meslin, C.; Nakagawa, T.; Patry, J.P.; Peter, J.; Popescu, R.; Regimbart, R.; Steckmeyer, J.C.; Tamain, B.; Vient, E.; Yuasa-Nakagawa, K.; Wieloch, A.

    1998-01-01

    The comparison of the experimental charge distribution with the predictions of a model simulating the statistical de-excitation of a very excited source yields a qualitative agreement.

  8. Finite size effects in the intermittency analysis of the fragment-size correlations

    International Nuclear Information System (INIS)

    Bozek, P.; Ploszajczak, M.; Tucholski, A.

    1991-01-01

    The influence of finite size effects on the fragment-size correlations in nuclear multifragmentation is studied using the method of scaled factorial moments, for a 1-dim percolation model and for a statistical model of the fragmentation process which, for a certain value of a tuning parameter, yields the power-law behaviour of the fragment-size distribution. It is shown that statistical models of this type contain only repulsive correlations, due to the conservation laws. A comparison of the results with those obtained in the non-critical 1-dim percolation and in the 3-dim percolation around the critical point is presented. Correlations in the 1-dim percolation model are analysed analytically and the mechanism of the attractive correlations in 1-dim and 3-dim is identified. (author) 30 refs., 7 figs.
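
    For reference, a minimal implementation of one common ("vertically averaged") definition of the scaled factorial moments is sketched below; conventions differ between papers, so it should be checked against the authors' definition.

```python
import numpy as np

def scaled_factorial_moment(counts, q):
    """Event-averaged scaled factorial moment F_q.

    counts: array (n_events, M) of fragment counts in M bins.
    Normalized so that F_q = 1 for uncorrelated Poisson noise.
    """
    n_events, M = counts.shape
    fact = np.ones_like(counts, dtype=float)
    for i in range(q):                    # falling factorial n(n-1)...(n-q+1)
        fact *= counts - i
    mean_N = counts.sum(axis=1).mean()
    return M ** (q - 1) * fact.sum(axis=1).mean() / mean_N ** q

# Uncorrelated Poisson counts should give F_q close to 1
rng = np.random.default_rng(0)
print(scaled_factorial_moment(rng.poisson(2.0, size=(5000, 16)), q=2))
```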

  9. Parametric analysis of the statistical model of the stick-slip process

    Science.gov (United States)

    Lima, Roberta; Sampaio, Rubens

    2017-06-01

    In this paper a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-friction force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system. The system response is composed of a random sequence alternating between stick and slip modes. From realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables, such as the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to perform a parametric analysis of the statistical model of the stick-slip process.
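
    A crude Monte Carlo sketch of the kind of experiment described (a Coulomb-friction oscillator on a bang-bang base whose velocity switches at Poisson jump times) is given below; all parameter values are hypothetical and the Euler integration is deliberately simple.

```python
import numpy as np

def simulate_stick_slip(T=50.0, dt=1e-3, k=1.0, m=1.0, mu=0.3, g=9.81,
                        v0=0.5, rate=1.0, seed=0):
    """Euler simulation of a Coulomb-friction oscillator on a moving base.

    Returns the alternating sequence of (mode, duration) episodes; the base
    velocity flips sign at the jumps of a Poisson process with the given rate.
    """
    rng = np.random.default_rng(seed)
    x, v, vb = 0.0, 0.0, v0
    t, t_switch = 0.0, rng.exponential(1.0 / rate)
    fmax = mu * m * g                     # maximum friction force
    episodes, mode, t_mode = [], None, 0.0
    while t < T:
        if t >= t_switch:                 # Poisson switching of base velocity
            vb, t_switch = -vb, t_switch + rng.exponential(1.0 / rate)
        spring, vrel = -k * x, v - vb
        if vrel == 0.0 and abs(spring) <= fmax:
            new_mode = "stick"            # mass rides with the base
        else:
            new_mode = "slip"
            v += dt * (spring - fmax * np.sign(vrel)) / m
            if (v - vb) * vrel < 0.0:     # relative velocity crossed zero
                v = vb                    # candidate for sticking next step
        x += dt * v
        if new_mode != mode:              # a new episode begins
            if mode is not None:
                episodes.append((mode, t - t_mode))
            mode, t_mode = new_mode, t
        t += dt
    episodes.append((mode, t - t_mode))
    return episodes

# Example: empirical statistics of the stick durations
sticks = [d for mode, d in simulate_stick_slip() if mode == "stick"]
```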

  10. Percolation Model of Nuclear Multifragmentation in High Energy Nucleus-Nucleus Interactions

    International Nuclear Information System (INIS)

    Abdel-Waged, Kh.

    1994-01-01

    A hybrid model is proposed, combining a Reggeon-theory-inspired model of nuclear distribution, which was successful in explaining the cascading of particles in high energy nucleus-nucleus interactions, with a percolation model. In the framework of this model, the yields of fragments in p + Ag, Au at 350 GeV and C + Ag, Au at 3.6 GeV/nucleon, as well as the charge distributions of fragments in Kr, Xe and U interactions with emulsion at ∼ 1 GeV/nucleon, are correctly described. 32 refs., 3 figs.

  11. Study on Semi-Parametric Statistical Model of Safety Monitoring of Cracks in Concrete Dams

    Directory of Open Access Journals (Sweden)

    Chongshi Gu

    2013-01-01

    Cracks are one of the hidden dangers in concrete dams, and the study of safety monitoring models for concrete dam cracks has always been difficult. Starting from the parametric statistical model for safety monitoring of cracks in concrete dams, with the help of semi-parametric statistical theory, and considering the abnormal behaviors of these cracks, a semi-parametric statistical model for safety monitoring of concrete dam cracks is established, to overcome the limitation of the parametric model in expressing the objective behavior. Previous projects show that the semi-parametric statistical model fits the data more closely and explains cracks in concrete dams better than the parametric statistical model. When used for forecasting, however, the forecast capability of the semi-parametric statistical model is equivalent to that of the parametric statistical model. The semi-parametric statistical model is simple to build, rests on a sound principle, and is highly practical, with good prospects for application in real projects.

  12. Statistical models for competing risk analysis

    International Nuclear Information System (INIS)

    Sather, H.N.

    1976-08-01

    Research results are reported on three new models with potential applications to competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structure by competing risks theory in two homogeneous populations, P1 and P2. Weibull models, which allow more generality than the Berkson and Elveback models, are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple failure mode domain of competing risks. The model used to illustrate the use of this methodology is a life table model which has constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined

  13. SoS contract verification using statistical model checking

    Directory of Open Access Journals (Sweden)

    Alessandro Mignogna

    2013-11-01

    Full Text Available Exhaustive formal verification for systems of systems (SoS) is impractical and cannot be applied on a large scale. In this paper we propose to use statistical model checking for efficient verification of SoS. We address three relevant aspects for systems of systems: (1) the model of the SoS, which includes stochastic aspects; (2) the formalization of the SoS requirements in the form of contracts; (3) the tool-chain to support statistical model checking for SoS. We adapt the SMC technique for application to heterogeneous SoS. We extend the UPDM/SysML specification language to express the SoS requirements that the implemented strategies over the SoS must satisfy. The requirements are specified with a new contract language specifically designed for SoS, targeting a high-level English-pattern language, but relying on an accurate semantics given by the standard temporal logics. The contracts are verified against the UPDM/SysML specification using the Statistical Model Checker (SMC) PLASMA combined with the simulation engine DESYRE, which integrates heterogeneous behavioral models through the functional mock-up interface (FMI) standard. The tool-chain allows computing an estimation of the satisfiability of the contracts by the SoS. The results help the system architect to trade off different solutions to guide the evolution of the SoS.
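
    The statistical core of SMC is easy to illustrate independently of the PLASMA/DESYRE tool-chain described above. A minimal sketch, in which both the stochastic model (a lazy random walk) and the property are invented for illustration: the Chernoff-Hoeffding bound fixes the number of simulation runs needed for a given precision and confidence.

    ```python
    import math
    import random

    def chernoff_samples(epsilon, delta):
        """Samples needed so P(|estimate - p| >= epsilon) <= delta (Chernoff-Hoeffding)."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

    def run_satisfies_property():
        """One stochastic simulation; model and property are invented here:
        a lazy random walk must stay below level 10 for 100 steps."""
        x = 0
        for _ in range(100):
            x += random.choice((-1, 0, 1))
            if x >= 10:
                return False
        return True

    epsilon, delta = 0.01, 0.01           # precision and confidence parameters
    n = chernoff_samples(epsilon, delta)  # 26492 runs for these settings
    hits = sum(run_satisfies_property() for _ in range(n))
    print(f"P(property) ~ {hits / n:.4f} +/- {epsilon} with confidence {1 - delta}")
    ```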

  14. Multifragmentation and percolation

    International Nuclear Information System (INIS)

    Campi, X.; Desbois, J.

    1985-01-01

    Percolation theory is applied to the problem of nucleus break-up. A model of nuclear percolation is proposed in which the rules for linkage of nucleons to form a cluster are defined in real and momentum spaces. This model exhibits a rather well defined threshold at rho ≅ 0.6. Analytical expressions for cluster size distributions at fixed concentration rho are given. Decay of excited clusters (by evaporation and fission) to give stable nuclear fragments is incorporated. The distribution law for rho in inclusive reactions is studied and the calculated mass yields are compared to experimental results
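
    A minimal site-percolation sketch (illustrative only; the nuclear model above defines linkage in both real and momentum space, and its threshold rho ≅ 0.6 need not coincide with the 2D site-percolation threshold of about 0.593 used here): occupy lattice sites with concentration rho, label clusters, and inspect cluster sizes.

    ```python
    import numpy as np
    from scipy.ndimage import label

    rng = np.random.default_rng(1)

    def cluster_sizes(rho, n=128):
        """Occupy an n x n lattice with probability rho and return cluster sizes."""
        occupied = rng.random((n, n)) < rho
        labels, num = label(occupied)              # nearest-neighbour connectivity
        return np.bincount(labels.ravel())[1:]     # drop the background count

    for rho in (0.4, 0.59, 0.8):                   # below, near, above the 2D threshold
        sizes = cluster_sizes(rho)
        print(f"rho={rho:.2f}: largest cluster {sizes.max()}, mean {sizes.mean():.1f}")
    ```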

  15. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians

  16. A statistical model for porous structure of rocks

    Institute of Scientific and Technical Information of China (English)

    JU Yang; YANG YongMing; SONG ZhenDuo; XU WenJing

    2008-01-01

    The geometric features and the distribution properties of pores in rocks were investigated by means of CT scanning tests of sandstones. The centroidal coordinates of pores, and the statistical characteristics of pore distance, quantity and size, together with their probability density functions, were formulated in this paper. The Monte Carlo method and a random number generating algorithm were employed to generate two series of random numbers with the desired statistical characteristics and probability density functions, upon which the random distributions of pore position, distance and quantity were determined. A three-dimensional porous structural model of sandstone was constructed based on the FLAC3D program and the information on pore position and distribution that the series of random numbers defined. On the basis of the modelling, Brazilian split tests of rock discs were carried out to examine the stress distribution, the pattern of element failure and the inosculation of failed elements. The simulation indicated that the proposed model was consistent with the realistic porous structure of rock in terms of the statistical properties of the pores and geometric similarity. The built-up model disclosed the influence of pores on the stress distribution, the failure mode of material elements and the inosculation of failed elements.

  17. A statistical model for porous structure of rocks

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The geometric features and the distribution properties of pores in rocks were investigated by means of CT scanning tests of sandstones. The centroidal coordinates of pores, and the statistical characteristics of pore distance, quantity and size, together with their probability density functions, were formulated in this paper. The Monte Carlo method and a random number generating algorithm were employed to generate two series of random numbers with the desired statistical characteristics and probability density functions, upon which the random distributions of pore position, distance and quantity were determined. A three-dimensional porous structural model of sandstone was constructed based on the FLAC3D program and the information on pore position and distribution that the series of random numbers defined. On the basis of the modelling, Brazilian split tests of rock discs were carried out to examine the stress distribution, the pattern of element failure and the inosculation of failed elements. The simulation indicated that the proposed model was consistent with the realistic porous structure of rock in terms of the statistical properties of the pores and geometric similarity. The built-up model disclosed the influence of pores on the stress distribution, the failure mode of material elements and the inosculation of failed elements.

  18. (ajst) statistical mechanics model for orientational

    African Journals Online (AJOL)

    Science and Engineering Series Vol. 6, No. 2, pp. 94 - 101. STATISTICAL MECHANICS MODEL FOR ORIENTATIONAL. MOTION OF TWO-DIMENSIONAL RIGID ROTATOR. Malo, J.O. ... there is no translational motion and that they are well separated so .... constant and I is the moment of inertia of a linear rotator. Thus, the ...

  19. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks, with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importance

  20. A Review of Modeling Bioelectrochemical Systems: Engineering and Statistical Aspects

    Directory of Open Access Journals (Sweden)

    Shuai Luo

    2016-02-01

    Full Text Available Bioelectrochemical systems (BES) are promising technologies to convert organic compounds in wastewater to electrical energy through a series of complex physical-chemical, biological and electrochemical processes. Representative BES such as microbial fuel cells (MFCs) have been studied and advanced for energy recovery. Substantial experimental and modeling efforts have been made to investigate the processes involved in electricity generation toward improving BES performance for practical applications. However, there are many parameters that can potentially affect these processes, making optimization of system performance hard to achieve. Mathematical models, including engineering models and statistical models, are powerful tools to help understand the interactions among the parameters in BES and to optimize BES configuration/operation. This review paper aims to introduce and discuss recent developments in BES modeling from engineering and statistical aspects, including analysis of model structure, description of application cases and sensitivity analysis of various parameters. It is expected to serve as a compass for integrating engineering and statistical modeling strategies to improve model accuracy for BES development.

  1. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  2. Statistical modelling of transcript profiles of differentially regulated genes

    Directory of Open Access Journals (Sweden)

    Sergeant Martin J

    2008-07-01

    Full Text Available Abstract Background The vast quantities of gene expression profiling data produced in microarray studies, and the more precise quantitative PCR, are often not statistically analysed to their full potential. Previous studies have summarised gene expression profiles using simple descriptive statistics, basic analysis of variance (ANOVA) and the clustering of genes based on simple models fitted to their expression profiles over time. We report the novel application of statistical non-linear regression modelling techniques to describe the shapes of expression profiles for the fungus Agaricus bisporus, quantified by PCR, and for E. coli and Rattus norvegicus, using microarray technology. The use of parametric non-linear regression models provides a more precise description of expression profiles, reducing the "noise" of the raw data to produce a clear "signal" given by the fitted curve, and describing each profile with a small number of biologically interpretable parameters. This approach then allows the direct comparison and clustering of the shapes of response patterns between genes and potentially enables a greater exploration and interpretation of the biological processes driving gene expression. Results Quantitative reverse transcriptase PCR-derived time-course data of genes were modelled. "Split-line" or "broken-stick" regression identified the initial time of gene up-regulation, enabling the classification of genes into those with primary and secondary responses. Five-day profiles were modelled using the biologically-oriented critical exponential curve, y(t) = A + (B + Ct)R^t + ε. This non-linear regression approach allowed the expression patterns for different genes to be compared in terms of curve shape, time of maximal transcript level and the decline and asymptotic response levels. Three distinct regulatory patterns were identified for the five genes studied. Applying the regression modelling approach to microarray-derived time course data
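
    The critical exponential model quoted above, y(t) = A + (B + Ct)R^t + ε, can be fitted with standard non-linear least squares. A minimal sketch on synthetic data (the parameter values and noise level are invented):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def critical_exponential(t, A, B, C, R):
        """y(t) = A + (B + C*t) * R**t, the curve used for the five-day profiles."""
        return A + (B + C * t) * R ** t

    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 30)                              # days
    y_true = critical_exponential(t, 1.0, -0.8, 2.0, 0.5)  # invented "gene" profile
    y_obs = y_true + rng.normal(0.0, 0.05, t.size)         # add measurement noise

    # Bound R to (0, 1] so the optimizer never tries a negative base.
    popt, pcov = curve_fit(critical_exponential, t, y_obs, p0=(1.0, 0.0, 1.0, 0.6),
                           bounds=((-10, -10, -10, 0.01), (10, 10, 10, 1.0)))
    print("A, B, C, R =", np.round(popt, 3))
    # Curve shape, time of maximal transcript level and asymptote (A) follow from popt.
    ```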

  3. Acceleration transforms and statistical kinetic models

    International Nuclear Information System (INIS)

    LuValle, M.J.; Welsher, T.L.; Svoboda, K.

    1988-01-01

    For a restricted class of problems a mathematical model of microscopic degradation processes, statistical kinetics, is developed and linked through acceleration transforms to the information which can be obtained from a system in which the only observable sign of degradation is sudden and catastrophic failure. The acceleration transforms were developed in accelerated life testing applications as a tool for extrapolating from the observable results of an accelerated life test to the dynamics of the underlying degradation processes. A particular concern of a physicist attempting to interpret the results of an analysis based on acceleration transforms is determining the physical species involved in the degradation process. These species may be (a) relatively abundant or (b) relatively rare. The main results of this paper are a theorem showing that for an important subclass of statistical kinetic models, acceleration transforms cannot be used to distinguish between cases a and b, and an example showing that in some cases falling outside the restrictions of the theorem, cases a and b can be distinguished by their acceleration transforms

  4. Statistical models describing the energy signature of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Thavlov, Anders

    2010-01-01

    Approximately one third of the primary energy production in Denmark is used for heating in buildings. Therefore efforts to accurately describe and improve the energy performance of the building mass are very important. For this purpose statistical models describing the energy signature of a building, i...... or varying energy prices. The paper will give an overview of statistical methods and applied models based on experiments carried out in FlexHouse, which is an experimental building in SYSLAB, Risø DTU. The models are of different complexity and can provide estimates of physical quantities such as UA-values, time constants of the building, and other parameters related to the heat dynamics. A method for selecting the most appropriate model for a given building is outlined and finally a perspective of the applications is given. Acknowledgements to the Danish Energy Saving Trust and the Interreg IV ``Vind i...

  5. STATISTICAL MODELS OF REPRESENTING INTELLECTUAL CAPITAL

    Directory of Open Access Journals (Sweden)

    Andreea Feraru

    2016-06-01

    Full Text Available This article, entitled Statistical Models of Representing Intellectual Capital, approaches and analyses the concept of intellectual capital, as well as the main models which can support entrepreneurs/managers in evaluating and quantifying the advantages of intellectual capital. Most authors examine intellectual capital from a static perspective and focus on the development of its various evaluation models. In this chapter we survey the classical static models: Sveiby, Edvinsson, Balanced Scorecard, as well as the canonical model of intellectual capital. Among the group of static models for evaluating organisational intellectual capital the canonical model stands out. This model enables the structuring of organisational intellectual capital into human capital, structural capital and relational capital. Although the model is widely used, it is a static one and can thus introduce a series of errors into the process of evaluation, because the three entities mentioned above are not independent in content, as any logic of structuring complex entities requires.

  6. Statistical mechanics of directed models of polymers in the square lattice

    CERN Document Server

    Rensburg, J V

    2003-01-01

    Directed square lattice models of polymers and vesicles have received considerable attention in the recent mathematical and physical sciences literature. These are idealized geometric directed lattice models introduced to study phase behaviour in polymers, and include Dyck paths, partially directed paths, directed trees and directed vesicle models. Directed models are closely related to models studied in the combinatorics literature (and are often exactly solvable). They are also simplified versions of a number of statistical mechanics models, including the self-avoiding walk, lattice animals and lattice vesicles. The exchange of approaches and ideas between statistical mechanics and combinatorics has considerably advanced the description and understanding of directed lattice models, and this will be explored in this review. The combinatorial nature of directed lattice path models makes a study using generating function approaches most natural. In contrast, the statistical mechanics approach would introduce...

  7. Salient features of heavy ion reactions in the intermediate energy region

    International Nuclear Information System (INIS)

    Jakobsson, B.

    1987-01-01

    In this lecture attention is focused on the most central, and therefore generally also the most violent, collisions. It is necessary to remember that the non-participating volumes can be very different for symmetric and asymmetric reactions. The onset of the multifragmentation channel, or rather the cessation of the fusion process, is the first topic to be discussed. This question is directly related to the limitation in energy and momentum transfer and thus to the question of nuclear transparency. Exclusive data on multifragmentation on an event-by-event basis, which may help the model constructors, are presented as the second topic. Also discussed in the lecture are the onset of fragmentation, fragment sizes in multifragmentation processes, the origin of light-particle correlations, and the emission of pions and kaons close to the threshold

  8. The epistemology of mathematical and statistical modeling: a quiet methodological revolution.

    Science.gov (United States)

    Rodgers, Joseph Lee

    2010-01-01

    A quiet methodological revolution, a modeling revolution, has occurred over the past several decades, almost without discussion. In contrast, the 20th century ended with contentious argument over the utility of null hypothesis significance testing (NHST). The NHST controversy may have been at least partially irrelevant, because in certain ways the modeling revolution obviated the NHST argument. I begin with a history of NHST and modeling and their relation to one another. Next, I define and illustrate principles involved in developing and evaluating mathematical models. I then discuss the difference between using statistical procedures within a rule-based framework and building mathematical models from a scientific epistemology. Only the former is treated carefully in most psychology graduate training. The pedagogical implications of this imbalance and the revised pedagogy required to account for the modeling revolution are described. To conclude, I discuss how attention to modeling implies shifting statistical practice in certain progressive ways. The epistemological basis of statistics has moved away from being a set of procedures, applied mechanistically, and moved toward building and evaluating statistical and scientific models. Copyright 2009 APA, all rights reserved.

  9. Establishing statistical models of manufacturing parameters

    International Nuclear Information System (INIS)

    Senevat, J.; Pape, J.L.; Deshayes, J.F.

    1991-01-01

    This paper reports on the effect of pilgering and cold-work parameters on contractile strain ratio and mechanical properties that were investigated using a large population of Zircaloy tubes. Statistical models were established between: contractile strain ratio and tooling parameters, mechanical properties (tensile test, creep test) and cold-work parameters, and mechanical properties and stress-relieving temperature

  10. Statistical geological discrete fracture network model. Forsmark modelling stage 2.2

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Aaron; La Pointe, Paul [Golder Associates Inc (United States); Simeonov, Assen [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hermanson, Jan; Oehman, Johan [Golder Associates AB, Stockholm (Sweden)

    2007-11-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is performing site characterization at two different locations, Forsmark and Laxemar, in order to locate a site for a final geologic repository for spent nuclear fuel. The program is built upon the development of Site Descriptive Models (SDMs) at specific timed data freezes. Each SDM is formed from discipline-specific reports from across the scientific spectrum. This report describes the methods, analyses, and conclusions of the geological modeling team with respect to a geological and statistical model of fractures and minor deformation zones (henceforth referred to as the geological DFN), version 2.2, at the Forsmark site. The geological DFN builds upon the work of other geological modelers, including the deformation zone (DZ), rock domain (RD), and fracture domain (FD) models. The geological DFN is a statistical model for stochastically simulating rock fractures and minor deformation zones at a scale of less than 1,000 m (the lower cut-off of the DZ models). The geological DFN is valid within four specific fracture domains inside the local model region, encompassing the candidate volume at Forsmark: FFM01, FFM02, FFM03, and FFM06. The models are built using data from detailed surface outcrop maps and the cored borehole record at Forsmark. The conceptual model for the Forsmark 2.2 geological DFN revolves around the concept of orientation sets; for each fracture domain, other model parameters such as size and intensity are tied to the orientation sets. Two classes of orientation sets were described: Global sets, which are encountered everywhere in the model region, and Local sets, which represent highly localized stress environments. Orientation sets were described in terms of their general cardinal direction (NE, NW, etc.). Two alternatives are presented for fracture size modeling: - the tectonic continuum approach (TCM, TCMF) described by coupled size-intensity scaling following power law distributions

  11. Statistical geological discrete fracture network model. Forsmark modelling stage 2.2

    International Nuclear Information System (INIS)

    Fox, Aaron; La Pointe, Paul; Simeonov, Assen; Hermanson, Jan; Oehman, Johan

    2007-11-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is performing site characterization at two different locations, Forsmark and Laxemar, in order to locate a site for a final geologic repository for spent nuclear fuel. The program is built upon the development of Site Descriptive Models (SDMs) at specific timed data freezes. Each SDM is formed from discipline-specific reports from across the scientific spectrum. This report describes the methods, analyses, and conclusions of the geological modeling team with respect to a geological and statistical model of fractures and minor deformation zones (henceforth referred to as the geological DFN), version 2.2, at the Forsmark site. The geological DFN builds upon the work of other geological modelers, including the deformation zone (DZ), rock domain (RD), and fracture domain (FD) models. The geological DFN is a statistical model for stochastically simulating rock fractures and minor deformation zones at a scale of less than 1,000 m (the lower cut-off of the DZ models). The geological DFN is valid within four specific fracture domains inside the local model region, encompassing the candidate volume at Forsmark: FFM01, FFM02, FFM03, and FFM06. The models are built using data from detailed surface outcrop maps and the cored borehole record at Forsmark. The conceptual model for the Forsmark 2.2 geological DFN revolves around the concept of orientation sets; for each fracture domain, other model parameters such as size and intensity are tied to the orientation sets. Two classes of orientation sets were described: Global sets, which are encountered everywhere in the model region, and Local sets, which represent highly localized stress environments. Orientation sets were described in terms of their general cardinal direction (NE, NW, etc.). Two alternatives are presented for fracture size modeling: - the tectonic continuum approach (TCM, TCMF) described by coupled size-intensity scaling following power law distributions
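
    A minimal sketch (not SKB's software; all numbers are invented) of the two stochastic ingredients such a geological DFN combines: fracture radii drawn from a power-law size distribution and fracture poles drawn from a Fisher distribution around a set's mean orientation.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def powerlaw_radii(n, r_min, k):
        """Sample fracture radii from a Pareto (power-law) size distribution."""
        return r_min * (1.0 - rng.random(n)) ** (-1.0 / k)

    def fisher_poles(n, kappa):
        """Sample unit poles around the z-axis from a Fisher distribution
        (inverse-CDF formula for the polar angle)."""
        u = rng.random(n)
        theta = np.arccos(1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        return np.column_stack([np.sin(theta) * np.cos(phi),
                                np.sin(theta) * np.sin(phi),
                                np.cos(theta)])

    radii = powerlaw_radii(1000, r_min=0.5, k=2.6)   # invented exponent and cutoff
    poles = fisher_poles(1000, kappa=15.0)           # invented concentration
    print(f"median radius {np.median(radii):.2f} m, "
          f"mean pole deviation {np.degrees(np.arccos(poles[:, 2]).mean()):.1f} deg")
    ```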

  12. Statistical Modelling of the Soil Dielectric Constant

    Science.gov (United States)

    Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy

    2010-05-01

    The dielectric constant of soil is a physical property that is very sensitive to water content. It underlies several electrical techniques for determining the water content by direct means (TDR, FDR, and others related to effects of electrical conductance and/or capacitance) and by indirect RS (Remote Sensing) methods. The work is devoted to a particular statistical manner of modelling the dielectric constant as a property accounting for a wide range of specific soil compositions, porosities, and mass densities, within the unsaturated water content range. Usually, similar models are determined for a few particular soil types; changing the soil type requires switching the model to another type or adjusting it by parametrization of the soil compounds. This makes it difficult to compare and relate results between models. The presented model was developed as a generic representation of soil as a hypothetical mixture of spheres, each representing a soil fraction in its proper phase state. The model generates a serial-parallel mesh of conductive and capacitive paths, which is analysed for its total conductive or capacitive property. The model was first developed to determine the thermal conductivity, and is now extended to the dielectric constant by analysing the capacitive mesh. The analysis is carried out by statistical means obeying physical laws related to the serial-parallel branching of the representative electrical mesh. Physical relevance of the analysis is established electrically, but the definition of the electrical mesh is controlled statistically by parametrization of the compound fractions, by determining the number of representative spheres per unitary volume per fraction, and by determining the number of fractions. In this way the model is capable of covering the properties of nearly all possible soil types, in all phase states, within recognition of the Lorenz and Knudsen conditions. In effect the model allows generating a hypothetical representative of

  13. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    Science.gov (United States)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in the principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The idea behind the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections that replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. The reduced-order models are also used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, like the tracer spectrum and fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with

  14. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
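
    A minimal sketch of the two-group idea behind modeling test statistics directly, assuming for illustration that z-statistics are N(0,1) under the null and N(3,1) under the alternative with a known mixing weight (in practice these would be estimated, e.g. by EM): posterior null probabilities follow from Bayes' rule, and the rejection set is grown while its estimated Bayesian FDR stays below the target.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Synthetic z-statistics: 90% null N(0,1), 10% alternative N(3,1).
    pi0, mu1 = 0.9, 3.0
    z = np.concatenate([rng.standard_normal(9000), rng.normal(mu1, 1.0, 1000)])

    # Posterior probability of the null given each test statistic (local fdr).
    f0 = pi0 * norm.pdf(z)
    f1 = (1 - pi0) * norm.pdf(z, loc=mu1)
    post_null = f0 / (f0 + f1)

    # Bayesian FDR control: reject the tests with the smallest posterior null
    # probability while their running mean (the estimated FDR of the rejection
    # set) stays below alpha.
    alpha = 0.05
    order = np.argsort(post_null)
    running_fdr = np.cumsum(post_null[order]) / np.arange(1, z.size + 1)
    n_reject = int(np.sum(running_fdr <= alpha))
    print(f"rejected {n_reject} hypotheses at Bayesian FDR {alpha}")
    ```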

  15. On-the-fly confluence detection for statistical model checking (extended version)

    NARCIS (Netherlands)

    Hartmanns, Arnd; Timmer, Mark

    Statistical model checking is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can only provide sound results if the

  16. Predictions of Quantum Molecular Dynamical Model between incident energy 50 and 1000 MeV/Nucleon

    Directory of Open Access Journals (Sweden)

    Kumar Sanjeev

    2015-01-01

    Full Text Available In the present work, the Quantum Molecular Dynamical (QMD) model is summarized as a useful tool for the incident energy range of 50 to 1000 MeV/nucleon in heavy-ion collisions. The model has reproduced the experimental results of various collaborations such as ALADIN, INDRA, PLASTIC BALL and FOPI up to a high level of accuracy for phenomena like multifragmentation, collective flow and elliptic flow in the above energy range. Further efforts are directed toward predicting the symmetry energy over this wide incident energy range.

  17. Topology for Statistical Modeling of Petascale Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Levine, Joshua [Univ. of Utah, Salt Lake City, UT (United States); Gyulassy, Attila [Univ. of Utah, Salt Lake City, UT (United States); Rojas, Maurice [Texas A & M Univ., College Station, TX (United States)

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  18. Analyzing sickness absence with statistical models for survival data

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Andersen, Per Kragh; Smith-Hansen, Lars

    2007-01-01

    OBJECTIVES: Sickness absence is the outcome in many epidemiologic studies and is often based on summary measures such as the number of sickness absences per year. In this study the use of modern statistical methods was examined by making better use of the available information. Since sickness absence data deal with events occurring over time, the use of statistical models for survival data has been reviewed, and the use of frailty models has been proposed for the analysis of such data. METHODS: Three methods for analyzing data on sickness absences were compared using a simulation study involving the following: (i) Poisson regression using a single outcome variable (number of sickness absences), (ii) analysis of time to first event using the Cox proportional hazards model, and (iii) frailty models, which are random effects proportional hazards models. Data from a study of the relation
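
    A minimal sketch contrasting approaches (i) and (ii) on simulated data, assuming the statsmodels and lifelines packages are available (the frailty extension in (iii) would add a random effect per subject); the cohort and effect sizes are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(4)

    # Simulated cohort: exposure doubles the sickness-absence rate (invented numbers).
    n = 2000
    exposed = rng.integers(0, 2, n)
    rate = 0.5 * np.exp(np.log(2.0) * exposed)      # absences per year
    counts = rng.poisson(rate * 1.0)                # (i) yearly counts
    first = rng.exponential(1.0 / rate)             # (ii) time to first absence
    observed = first < 1.0                          # administrative censoring at 1 year
    time = np.minimum(first, 1.0)

    # (i) Poisson regression on the number of absences.
    X = sm.add_constant(exposed.astype(float))
    poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print("Poisson log rate ratio:", poisson_fit.params[1].round(3))

    # (ii) Cox proportional hazards for time to first absence.
    df = pd.DataFrame({"time": time, "event": observed.astype(int), "exposed": exposed})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print("Cox log hazard ratio:", cph.params_["exposed"].round(3))
    # Both estimates should recover log(2) ~ 0.693 up to sampling noise.
    ```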

  19. Physics-based statistical model and simulation method of RF propagation in urban environments

    Science.gov (United States)

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  20. Encoding Dissimilarity Data for Statistical Model Building.

    Science.gov (United States)

    Wahba, Grace

    2010-12-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba 2005. A framework for kernel regularization with application to protein clustering. Proceedings of the National Academy of Sciences 102, 12332-12337. G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar 2009. Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models. Proceedings of the National Academy of Sciences 106, 8128-8133. F. Lu, Y. Lin and G. Wahba. Robust manifold unfolding with kernel regularization. TR 1008, Department of Statistics, University of Wisconsin-Madison.

  1. Simple classical model for Fano statistics in radiation detectors

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, David V. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)], E-mail: David.Jordan@pnl.gov; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; Rene Corrales, L.; Peurrung, Anthony J. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ('bathtub') with a small dipping implement ('shot or whiskey glass'). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the 'Fano effect'). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, 'microscopic' physical models of detector material response to ionizing radiation is discussed.
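
    A minimal simulation of the bathtub/shot-glass picture, with an invented uniform single-IC energy distribution: partition a fixed deposited energy into information carriers and compare the simulated Fano factor with the prescription var(w)/mean(w)^2 obtained from the mean and variance of the single-IC energy distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    E_dep = 1000.0                  # deposited energy, arbitrary units
    w_mean, w_spread = 3.0, 0.9     # invented single-IC energy distribution (uniform)

    def n_carriers():
        """'Dip' single-IC energies out of the fixed-energy 'bathtub' until empty."""
        remaining, n = E_dep, 0
        while True:
            w = rng.uniform(w_mean - w_spread, w_mean + w_spread)
            if w > remaining:
                return n
            remaining -= w
            n += 1

    counts = np.array([n_carriers() for _ in range(20000)])
    fano_sim = counts.var() / counts.mean()

    w_var = (2 * w_spread) ** 2 / 12.0  # variance of the uniform single-IC energy
    print(f"simulated Fano factor {fano_sim:.4f}")
    print(f"prescription var(w)/mean(w)^2 = {w_var / w_mean**2:.4f}")
    # Both are well below 1: energy conservation forces sub-Poisson fluctuations.
    ```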

  2. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approaches for the development and validation of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  3. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but present methods of photometric measurement impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate the measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the photometric measurement model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
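
    A minimal sketch of the train/test split described above, with an invented linear calibration model relating catalog to instrumental magnitudes (the model form actually used in the paper is not specified here): training stars fix the least-squares parameters, testing stars measure the accuracy.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic known stars: instrumental magnitude = catalog magnitude + zero point,
    # plus a mild colour term and noise (all values invented for illustration).
    n = 60
    catalog = rng.uniform(6.0, 12.0, n)
    colour = rng.uniform(-0.2, 1.2, n)
    instrumental = catalog - 21.5 + 0.08 * colour + rng.normal(0.0, 0.05, n)

    train, test = np.arange(n) < 40, np.arange(n) >= 40

    # Least-squares fit of catalog = a*instrumental + b*colour + c on training stars.
    A = np.column_stack([instrumental, colour, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A[train], catalog[train], rcond=None)

    pred = A[test] @ coef
    rms = np.sqrt(np.mean((pred - catalog[test]) ** 2))
    print(f"test-star RMS error: {rms:.3f} mag")  # accuracy of the photometric model
    ```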

  4. GIA Model Statistics for GRACE Hydrology, Cryosphere, and Ocean Science

    Science.gov (United States)

    Caron, L.; Ivins, E. R.; Larour, E.; Adhikari, S.; Nilsson, J.; Blewitt, G.

    2018-03-01

    We provide a new analysis of glacial isostatic adjustment (GIA) with the goal of assembling the model uncertainty statistics required for rigorously extracting trends in surface mass from the Gravity Recovery and Climate Experiment (GRACE) mission. Such statistics are essential for deciphering sea level, ocean mass, and hydrological changes because the latter signals can be relatively small (≤2 mm/yr water height equivalent) over very large regions, such as major ocean basins and watersheds. With abundant new >7 year continuous measurements of vertical land motion (VLM) reported by Global Positioning System stations on bedrock and new relative sea level records, our new statistical evaluation of GIA uncertainties incorporates Bayesian methodologies. A unique aspect of the method is that both the ice history and 1-D Earth structure vary through a total of 128,000 forward models. We find that best fit models poorly capture the statistical inferences needed to correctly invert for lower mantle viscosity and that GIA uncertainty exceeds the uncertainty ascribed to trends from 14 years of GRACE data in polar regions.

  5. Experimental, statistical, and biological models of radon carcinogenesis

    International Nuclear Information System (INIS)

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig

  6. Nuclear multifragmentation: Basic concepts

    Indian Academy of Sciences (India)

    2014-05-02

    May 2, 2014 ... model [2], isospin-dependent quantum molecular dynamics (IQMD) model ... easier to implement analytically and its main advantage is that one can ... We also have to state which nuclei are included in computing Q_{N0,Z0} (eq.

  7. On experimental and theoretical studies of dynamics and particle production in p-nucleus and heavy ion reactions

    Energy Technology Data Exchange (ETDEWEB)

    Fokin, A.B

    1998-11-01

    Several experiments and theoretical models of intermediate energy heavy ion collision physics are presented in this thesis. Statistical and dynamical aspects of nuclear collisions are widely discussed these days, particularly in connection with the multifragmentation phenomenon and the possible link to a liquid-gas phase transition in the spinodal region of the nuclear matter phase diagram. Experimental techniques which allow us to measure various parameters of the hot and dense (equilibrated) regions (emission sources) formed in a heavy ion collision are well established nowadays. In recent CHIC (Celsius Heavy Ion Collaboration) experiments the properties of such sources were measured using the slowly ramping mode of the CELSIUS storage ring. In this thesis the entropy and chaos production in nuclear collisions is discussed in connection with the t/d/p ratios. Subthreshold pion production explores collective effects in heavy ion collisions and brings additional information about the equation of state of nuclear matter. Continuous pion production excitation functions were measured in the beam energy region from far below the nucleon-nucleon threshold up to the delta dominant region. Mass and angular dependencies of pion production are discussed. A version of the molecular dynamics model which includes pion production in direct nucleon-nucleon collisions was developed and experimental data were analysed within the scope of this model. Properties of the emission sources formed in heavy ion collisions at energies below 50A MeV were studied in the fragmentation-type experiments performed by CHIC. Temperatures of these sources were extracted from fragment energy spectra and from the 'isotopic effect'. A version of the quantum molecular dynamics model, where the Pauli potential is introduced into the Hamiltonian, was combined with the statistical multifragmentation model and used to explore the dynamical and statistical properties of the reaction development. The artificial neural networks

  8. On experimental and theoretical studies of dynamics and particle production in p-nucleus and heavy ion reactions

    International Nuclear Information System (INIS)

    Fokin, A.B.

    1998-11-01

    Several experiments and theoretical models of intermediate energy heavy ion collision physics are presented in this thesis. Statistical and dynamical aspects of nuclear collisions are widely discussed these days, particularly in connection with the multifragmentation phenomenon and the possible link to a liquid-gas phase transition in the spinodal region of the nuclear matter phase diagram. Experimental techniques which allow us to measure various parameters of the hot and dense (equilibrated) regions (emission sources) formed in a heavy ion collision are well established nowadays. In recent CHIC (Celsius Heavy Ion Collaboration) experiments the properties of such sources were measured using the slowly ramping mode of the CELSIUS storage ring. In this thesis the entropy and chaos production in nuclear collisions is discussed in connection with the t/d/p ratios. Subthreshold pion production explores collective effects in heavy ion collisions and brings additional information about the equation of state of nuclear matter. Continuous pion production excitation functions were measured in the beam energy region from far below the nucleon-nucleon threshold up to the delta dominant region. Mass and angular dependencies of pion production are discussed. A version of the molecular dynamics model which includes pion production in direct nucleon-nucleon collisions was developed and experimental data were analysed within the scope of this model. Properties of the emission sources formed in heavy ion collisions at energies below 50A MeV were studied in the fragmentation-type experiments performed by CHIC. Temperatures of these sources were extracted from fragment energy spectra and from the 'isotopic effect'. A version of the quantum molecular dynamics model, where the Pauli potential is introduced into the Hamiltonian, was combined with the statistical multifragmentation model and used to explore the dynamical and statistical properties of the reaction development. The artificial neural networks

  9. A statistical model for instable thermodynamical systems

    International Nuclear Information System (INIS)

    Sommer, Jens-Uwe

    2003-01-01

    A generic model is presented for statistical systems which display thermodynamic features in contrast to our everyday experience, such as infinite and negative heat capacities. Such systems are instable in terms of classical equilibrium thermodynamics. Using our statistical model, we are able to investigate states of instable systems which are undefined in the framework of equilibrium thermodynamics. We show that a region of negative heat capacity in the adiabatic environment leads to a first-order-like phase transition when the system is coupled to a heat reservoir. This phase transition takes place without phase coexistence. Nevertheless, all intermediate states are stable due to fluctuations. When two instable systems are brought into thermal contact, the temperature of the composed system is lower than the minimum temperature of the individual systems. Generally, the equilibrium states of instable systems cannot be simply decomposed into equilibrium states of the individual systems. The properties of instable systems depend on the environment, and ensemble equivalence is broken

  10. Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas

    Science.gov (United States)

    Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi

    2016-06-01

    The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications, it can act as an independent information source, or give additional information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characterization of SAR intensity, there has been far less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, all of the existing work that focuses on InSAR coherence statistics models the coherence with a Gaussian distribution, with no discrimination between data resolutions or scene types. But the properties of coherence may differ for different data resolutions and scene types. In this paper, we investigate the coherence statistics for high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. Firstly, several regions are selected from the coherence map manually and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model performs better than the other distributions.

  11. MODEL ACCURACY COMPARISON FOR HIGH RESOLUTION INSAR COHERENCE STATISTICS OVER URBAN AREAS

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Full Text Available The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications, it can act as an independent information source, or give additional information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characterization of SAR intensity, there has been far less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, all of the existing work that focuses on InSAR coherence statistics models the coherence with a Gaussian distribution, with no discrimination between data resolutions or scene types. But the properties of coherence may differ for different data resolutions and scene types. In this paper, we investigate the coherence statistics for high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. Firstly, several regions are selected from the coherence map manually and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model performs better than the other distributions.
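
    A minimal sketch of the model comparison, using scipy's built-in distributions as candidates and a synthetic stand-in for the coherence samples of one labelled class (real data would come from the labelled regions of the map): fit each model by maximum likelihood and rank by log-likelihood.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Stand-in "coherence" sample on [0, 1]; real data would come from one class.
    coh = rng.beta(4.0, 2.0, 5000)

    candidates = {
        "Gaussian": stats.norm,
        "Rayleigh": stats.rayleigh,
        "Weibull": stats.weibull_min,
        "Beta": stats.beta,
    }

    for name, dist in candidates.items():
        params = dist.fit(coh)                     # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(coh, *params))
        print(f"{name:8s} log-likelihood {loglik:10.1f}")
    # On a bounded [0, 1] sample the Beta fit typically wins, matching the finding.
    ```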

  12. Statistical Modelling of Resonant Cross Section Structure in URR, Model of the Characteristic Function

    International Nuclear Information System (INIS)

    Koyumdjieva, N.

    2006-01-01

    A statistical model for the resonant cross section structure in the Unresolved Resonance Region has been developed in the framework of the R-matrix formalism in the Reich-Moore approach, with effective accounting of the resonance parameter fluctuations. The model uses only the average resonance parameters and can be effectively applied for analyses of cross section functionals averaged over many resonances. These are cross section moments, and transmission and self-indication functions measured through a thick sample. In this statistical model the resonant cross section structure is taken to be periodic, and the R-matrix is a function of ε = E/D with period N (0 ≤ ε ≤ N): R_nc(ε) = (π/2)√(S_n S_c) (1/N) Σ_{i=1..N} β_{in} β_{ic} cot[π(ε_i − ε − iS_i)/N]. Here S_n, S_c and S_i are, respectively, the neutron strength function, the strength function for the fission or inelastic channel, and the strength function for radiative capture; N is the number of resonances (ε_i, β_i), which obey Porter-Thomas and Wigner statistics. The simple case of this statistical model concerns the resonant cross section structure for non-fissile nuclei below the threshold for inelastic scattering - the model of the characteristic function, implemented in the HARFOR program. In the above model some improvements have been made in the calculation of the phases and logarithmic derivatives of the neutron channels. In the parameterization we use the free parameter R_l^∞, which accounts for the influence of distant resonances. The above scheme for statistical modelling of the resonant cross section structure has been applied to the evaluation of experimental data for the total, capture and inelastic cross sections of 232Th in the URR (4-150) keV, and also the transmission and self-indication functions in (4-175) keV. A set of evaluated average resonance parameters has been obtained. The evaluated average resonance parameters in the URR are consistent with those in the Resolved Resonance Region (CRP for the Th-U cycle, Vienna, 2006).

  13. Statistical models for expert judgement and wear prediction

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1994-01-01

    This thesis studies the statistical analysis of expert judgements and the prediction of wear. The point of view adopted is that of information theory and Bayesian statistics. A general Bayesian framework for analyzing both expert judgements and wear prediction is presented. Information theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions, and information theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied to analyzing expert judgements based on ordinal comparisons. In this context, the value of the information lost in the ordinal comparison process is analyzed by applying decision theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components is studied. Monte Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)

  14. Security of statistical data bases: invasion of privacy through attribute correlational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Palley, M.A.

    1985-01-01

    This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.
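
    A minimal sketch of the ACM idea under stated assumptions: hypothetical age/seniority/salary attributes, multiplicative proportional noise standing in for the perturbed query responses, and an ordinary least-squares regression as the correlational model.

```python
# Sketch of Attribute Correlational Modeling: regress a confidential
# attribute (salary) on public ones using a "synthetic" data base built
# from proportionally perturbed query responses. All names hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 500

# True (hidden) micro-data.
age = rng.uniform(25, 65, n)
seniority = rng.uniform(0, 30, n)
salary = 20_000 + 900 * age + 1_200 * seniority + rng.normal(0, 5_000, n)

# Synthetic data base: query answers with proportional random variation.
noise = 1.0 + rng.normal(0, 0.03, n)
syn_age, syn_sen, syn_sal = age * noise, seniority * noise, salary * noise

# Fit the regression model on the synthetic data base.
X = np.column_stack([np.ones(n), syn_age, syn_sen])
coef, *_ = np.linalg.lstsq(X, syn_sal, rcond=None)

# Estimate the confidential salary of a target with known public attributes.
target = np.array([1.0, 40.0, 10.0])      # intercept, age = 40, seniority = 10
print("estimated salary:", target @ coef)
```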

  15. A nonextensive statistical model for the nucleon structure function

    International Nuclear Information System (INIS)

    Trevisan, Luis A.; Mirez, Carlos

    2013-01-01

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions are replaced by the equivalent functions of q-statistics. The parameters of the model are an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalization in the nucleon.
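
    The sketch below shows one common way to q-generalize the Fermi-Dirac occupation number via the Tsallis q-exponential, recovering the usual distribution as q → 1. The exact functional form used by the authors may differ, so treat this as an assumption-laden illustration.

```python
# Sketch: Fermi-Dirac occupation and a Tsallis q-statistics analogue.
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q)); e_1 = exp."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    # Outside the support: 0 for q < 1 (cutoff), +inf for q > 1 (pole).
    out = np.full_like(base, np.inf if q > 1.0 else 0.0)
    pos = base > 0.0
    out[pos] = base[pos] ** (1.0 / (1.0 - q))
    return out

def occupation(E, T, mu, q):
    """q-generalized Fermi-Dirac occupation number n(E)."""
    return 1.0 / (q_exp((E - mu) / T, q) + 1.0)

E = np.array([0.5, 0.9, 1.0, 1.1, 1.5])
print(occupation(E, T=0.1, mu=1.0, q=1.0))   # ordinary Fermi-Dirac
print(occupation(E, T=0.1, mu=1.0, q=1.1))   # q-deformed version
```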

  16. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  17. Inclusion of temperature dependence of fission barriers in statistical model calculations

    International Nuclear Information System (INIS)

    Newton, J.O.; Popescu, D.G.; Leigh, J.R.

    1990-08-01

    The temperature dependence of fission barriers has been interpolated from the results of recent theoretical calculations and included in the statistical model code PACE2. It is shown that the inclusion of temperature dependence causes significant changes to the values of the statistical model parameters deduced from fits to experimental data. 21 refs., 2 figs

  18. A Census of Statistics Requirements at U.S. Journalism Programs and a Model for a "Statistics for Journalism" Course

    Science.gov (United States)

    Martin, Justin D.

    2017-01-01

    This essay presents data from a census of statistics requirements and offerings at all 4-year journalism programs in the United States (N = 369) and proposes a model of a potential course in statistics for journalism majors. The author proposes that three philosophies underlie a statistics course for journalism students. Such a course should (a)…

  19. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    Science.gov (United States)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound, like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  20. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have been greatly improved during the last years and have shown their ability to deal with uncertainties during the design stage and to optimize the functioning and maintenance of industrial installations. They are based on a mechanical modeling of the structural behavior according to the considered failure modes, and on a probabilistic representation of the input parameters of this modeling. In practice, only limited statistical information is available to build the probabilistic representation, and different sophistication levels of the mechanical modeling may be introduced. Thus, besides the physical randomness, other uncertainties occur in such analyses. The aim of this work is threefold: 1. first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in reliability analyses; the obtained reliability index measures the confidence in the structure given the statistical information available. 2. Then, to show a methodology leading to reliability results evaluated from a particular mechanical modeling but using a less sophisticated one; the objective is to decrease the computational effort required by the reference modeling. 3. Finally, to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical modeling used. The concepts are illustrated in the case of a welded pipe and in the case of a natural draught cooling tower. The results show the interest of the methodologies in an industrial context. [fr]

  1. Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions.

    Science.gov (United States)

    Nishino, Ko; Lombardi, Stephen

    2011-01-01

    We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.

  2. Studies of nuclei under the extreme conditions of density, temperature, isospin asymmetry and the phase diagram of hadronic matter

    Energy Technology Data Exchange (ETDEWEB)

    Mekjian, Aram [Rutgers Univ., Piscataway, NJ (United States). Dept. of Physics and Astronomy

    2016-10-18

    The main emphasis of the entire project is on issues having to do with medium-energy and ultra-relativistic heavy-ion collisions. A major goal of both theory and experiment is to study the properties of hot, dense nuclear matter under various extreme conditions and to map out the phase diagram in density or chemical potential and temperature. My studies in medium-energy nuclear collisions focused on the liquid-gas phase transition and cluster yields from such transitions. Here I developed both a statistical model of nuclear multifragmentation and a mean-field theory.

  3. Statistical approach for selection of regression model during validation of bioanalytical method

    Directory of Open Access Journals (Sweden)

    Natalija Nakov

    2014-06-01

    Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed-rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models, because only slight differences in the error (presented through the relative residuals, RR) were obtained. Estimation of the significance of the differences in the RR was achieved using the Wilcoxon signed-rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple nonparametric statistical test provides statistical confirmation of the choice of an adequate regression model.
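
    A small sketch of the comparison described above, assuming hypothetical calibration data, 1/x² weighting, and SciPy's Wilcoxon signed-rank test applied to the paired absolute relative residuals; the published procedure may differ in detail.

```python
# Sketch: compare relative residuals (RR) of weighted linear vs. quadratic
# fits to hypothetical calibration data with a Wilcoxon signed-rank test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
conc = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500, 1000], dtype=float)
resp = 0.05 * conc + 2e-5 * conc**2 + rng.normal(0, 0.01 * conc + 0.01)

# np.polyfit expects 1/sigma weights, so pass 1/x for 1/x^2 weighting.
lin = np.polyfit(conc, resp, deg=1, w=1.0 / conc)
quad = np.polyfit(conc, resp, deg=2, w=1.0 / conc)

rr_lin = (resp - np.polyval(lin, conc)) / resp
rr_quad = (resp - np.polyval(quad, conc)) / resp

stat, p = stats.wilcoxon(np.abs(rr_lin), np.abs(rr_quad))
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```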

  4. Statistical modeling of geopressured geothermal reservoirs

    Science.gov (United States)

    Ansari, Esmail; Hughes, Richard; White, Christopher D.

    2017-06-01

    Identifying attractive candidate reservoirs for producing geothermal energy requires predictive models. In this work, inspectional analysis and statistical modeling are used to create simple predictive models for a line drive design. Inspectional analysis on the partial differential equations governing this design yields a minimum number of fifteen dimensionless groups required to describe the physics of the system. These dimensionless groups are explained and confirmed using models with similar dimensionless groups but different dimensional parameters. This study models dimensionless production temperature and thermal recovery factor as the responses of a numerical model. These responses are obtained by a Box-Behnken experimental design. An uncertainty plot is used to segment the dimensionless time and develop a model for each segment. The important dimensionless numbers for each segment of the dimensionless time are identified using the Boosting method. These selected numbers are used in the regression models. The developed models are reduced to have a minimum number of predictors and interactions. The reduced final models are then presented and assessed using testing runs. Finally, applications of these models are offered. The presented workflow is generic and can be used to translate the output of a numerical simulator into simple predictive models in other research areas involving numerical simulation.

  5. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. texreg: Conversion of Statistical Model Output in R to LATEX and HTML Tables

    Directory of Open Access Journals (Sweden)

    Philip Leifeld

    2013-11-01

    Full Text Available A recurrent task in applied statistics is the (mostly manual) preparation of model output for inclusion in LATEX, Microsoft Word, or HTML documents, usually with more than one model presented in a single table along with several goodness-of-fit statistics. However, statistical models in R have diverse object structures and summary methods, which makes this process cumbersome. This article first develops a set of guidelines for converting statistical model output to LATEX and HTML tables, then assesses to what extent existing packages meet these requirements, and finally presents the texreg package as a solution that meets all of the criteria set out in the beginning. After providing various usage examples, a blueprint for writing custom model extensions is proposed.

   7. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing to decision analysis. To this should be added the progress of modern IT, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
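
    A minimal sketch of a probabilistic Markov cohort model of the kind described, written in plain Python rather than the BUGS language; the states, costs and Beta parameters are hypothetical.

```python
# Sketch: 3-state Markov cohort model (well / sick / dead) with uncertain
# transition probabilities drawn from Beta distributions, as in a
# probabilistic simulation. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_sims, n_cycles = 1000, 20
costs = np.array([100.0, 1500.0, 0.0])    # cost per cycle in each state
results = np.empty(n_sims)

for s in range(n_sims):
    p_ws = rng.beta(20, 180)              # P(well -> sick), ~0.10
    p_sd = rng.beta(30, 120)              # P(sick -> dead), ~0.20
    P = np.array([[1 - p_ws, p_ws, 0.0],
                  [0.0, 1 - p_sd, p_sd],
                  [0.0, 0.0, 1.0]])       # dead is absorbing
    cohort = np.array([1.0, 0.0, 0.0])    # everyone starts well
    total = 0.0
    for _ in range(n_cycles):
        total += cohort @ costs
        cohort = cohort @ P
    results[s] = total

print("expected cost:", results.mean())
print("95% interval :", np.percentile(results, [2.5, 97.5]))
```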

  8. Statistical modelling for recurrent events: an application to sports injuries.

    Science.gov (United States)

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-09-01

    Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. A classical statistical model of heavy ion collisions

    International Nuclear Information System (INIS)

    Schmidt, R.; Teichert, J.

    1980-01-01

    The use of the computer code TRAJEC, which represents the numerical realization of a classical statistical model for heavy ion collisions, is described. The code calculates the results of a classical friction model as well as various multi-differential cross sections for heavy ion collisions. The INPUT and OUTPUT information of the code is described. Two examples of data sets are given. [ru]

  10. The GNASH preequilibrium-statistical nuclear model code

    International Nuclear Information System (INIS)

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium-statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs

  11. Production of hypernuclei at FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Gaitanos, Theodoros; Larionov, Alexei; Mosel, Ulrich [Institut fuer Theoretische Physik, Universitaet Giessen (Germany)

    2011-07-01

    The strangeness sector of the strong interaction is important for our knowledge of, e.g., nuclear astrophysics, and is still a widely debated topic of current research. Hypernuclear production in heavy-ion collisions and anti p-nucleus collisions offers the opportunity to investigate the hyperon-nucleon and hyperon-hyperon interactions inside a hadronic environment in terrestrial laboratories. We study the formation of fragments with and without strangeness content in the framework of a dynamical transport model (Giessen Boltzmann-Uehling-Uhlenbeck, GiBUU) and a statistical approach (Statistical Multifragmentation Model, SMM) to fragment formation. We use a coalescence picture for the production of single-Λ and double-Λ hypernuclei, and provide theoretical estimates of their spectra and inclusive cross sections in heavy-ion collisions and anti p-induced reactions, relevant for the HypHI and PANDA experiments at FAIR.

  12. Development of a statistical model for cervical cancer cell death with irreversible electroporation in vitro.

    Science.gov (United States)

    Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing

    2018-01-01

    The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature, using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and the pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used to build the statistical relationship from the cell viability data obtained in the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of the ablation zones between the statistical model and the electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in describing the ablation zone and its applicability for different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built, and the values of the curve-fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in representing the ablation zone. This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells, and was more accurate than the electric field model; (2) the proposed statistical model was able to estimate the value of the electric field threshold.
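
    A minimal sketch of fitting a Peleg-Fermi survival curve to viability-versus-pulse-number data with SciPy; the data points and starting values are hypothetical, not the study's measurements.

```python
# Sketch: fit the Peleg-Fermi survival curve S(n) = 1/(1 + exp((n - nc)/a)),
# with nc the critical pulse number (50% survival) and a a kinetic constant.
import numpy as np
from scipy.optimize import curve_fit

def peleg_fermi(n, nc, a):
    return 1.0 / (1.0 + np.exp((n - nc) / a))

# Hypothetical viability data (surviving fraction vs. number of pulses).
pulses = np.array([1, 10, 20, 40, 60, 80, 99], dtype=float)
viability = np.array([0.98, 0.90, 0.72, 0.35, 0.15, 0.06, 0.03])

(nc, a), _ = curve_fit(peleg_fermi, pulses, viability, p0=(40.0, 10.0))
print(f"nc = {nc:.1f} pulses, a = {a:.1f}")
print("predicted survival at 50 pulses:", peleg_fermi(50.0, nc, a))
```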

  13. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviours.

  14. The Statistical Modeling of the Trends Concerning the Romanian Population

    Directory of Open Access Journals (Sweden)

    Gabriela OPAIT

    2014-11-01

    Full Text Available This paper presents the statistical modeling of the trends concerning the resident population in Romania, that is, the total Romanian population, by means of the „Least Squares Method”. Any country develops through the growth of its population, and hence of its workforce, which is a factor influencing the growth of the Gross Domestic Product (G.D.P.). The „Least Squares Method” is a statistical technique for determining the trend line that best fits a model.
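
    A worked example of the Least Squares Method for such a trend line is given below; the population figures are placeholders, not official Romanian statistics.

```python
# Minimal illustration of a least-squares trend line for a population series.
import numpy as np

years = np.arange(2000, 2014)
pop_millions = np.array([22.4, 22.1, 21.8, 21.6, 21.5, 21.4, 21.3,
                         21.1, 20.9, 20.6, 20.3, 20.2, 20.1, 20.0])

# Fit pop = a + b*year; np.polyfit returns the slope first.
b, a = np.polyfit(years, pop_millions, deg=1)
print(f"trend: pop = {a:.1f} + ({b:.3f} per year)  [millions]")
print("extrapolation to 2020:", a + b * 2020)
```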

  15. Nuclear fragmentation in central collisions: Ni + Au from 32 to 90 A*MeV

    International Nuclear Information System (INIS)

    Bellaize, N.

    2000-01-01

    Heavy ion collisions are one of the tools for studying nuclear systems far away from their equilibrium state. This work concerns the most violent collisions in the Ni + Au system for incident energies ranging from 32 up to 90 AMeV. These events were detected with the multidetector INDRA and selected by Principal Component Analysis (a multidimensional analysis), which classifies the events according to their detection features and their degree of dissipation. We observed two deexcitation mechanisms: a fusion/fission-evaporation process and a multifragmentation process. The two coexist from 32 to 52 AMeV, whereas only one subsists at 90 AMeV. For those two mechanisms, a component was observed which seems to be linked to the initial phase of the reaction. The energy fluctuations of this component lead to variations in the energy deposit, which determines the deexcitation of the system. The experimental multifragmentation data of the Ni + Au system (52 and 90 AMeV) were compared to the predictions of a statistical model and to the experimental data of the Xe + Sn system at 50 AMeV (also detected with INDRA). These comparisons show the lack of collective radial energy for fragments (Z≥10) in the Ni + Au system, and show that the degree of multifragmentation depends on the thermal excitation energy. The mean kinetic energies of particles and light fragments are larger in the Ni + Au system than in the Xe + Sn system. This observation shows that these particles are more sensitive to the entrance channel for an asymmetric system than for a symmetric one (for the same number of nucleons). (author)

  16. Nuclear fragmentation in central collisions: Ni + Au from 32 to 90 A*MeV; Fragmentation dans les collisions centrales du systeme Ni + Au de 32 a 90 A MeV

    Energy Technology Data Exchange (ETDEWEB)

    Bellaize, N

    2000-11-03

    Heavy ion collisions are one of the tools for studying nuclear systems far away from their equilibrium state. This work concerns the most violent collisions in the Ni + Au system for incident energies ranging from 32 up to 90 AMeV. These events were detected with the multidetector INDRA and selected by Principal Component Analysis (a multidimensional analysis), which classifies the events according to their detection features and their degree of dissipation. We observed two deexcitation mechanisms: a fusion/fission-evaporation process and a multifragmentation process. The two coexist from 32 to 52 AMeV, whereas only one subsists at 90 AMeV. For those two mechanisms, a component was observed which seems to be linked to the initial phase of the reaction. The energy fluctuations of this component lead to variations in the energy deposit, which determines the deexcitation of the system. The experimental multifragmentation data of the Ni + Au system (52 and 90 AMeV) were compared to the predictions of a statistical model and to the experimental data of the Xe + Sn system at 50 AMeV (also detected with INDRA). These comparisons show the lack of collective radial energy for fragments (Z≥10) in the Ni + Au system, and show that the degree of multifragmentation depends on the thermal excitation energy. The mean kinetic energies of particles and light fragments are larger in the Ni + Au system than in the Xe + Sn system. This observation shows that these particles are more sensitive to the entrance channel for an asymmetric system than for a symmetric one (for the same number of nucleons). (author)

  17. Sound statistical model checking for MDP using partial order and confluence reduction

    NARCIS (Netherlands)

    Hartmanns, Arnd; Timmer, Mark

    Statistical model checking (SMC) is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can in general only provide sound

  18. Statistical modeling to support power system planning

    Science.gov (United States)

    Staid, Andrea

    This dissertation focuses on data-analytic approaches that improve our understanding of power system applications to promote better decision-making. It tackles issues of risk analysis, uncertainty management, resource estimation, and the impacts of climate change. Tools of data mining and statistical modeling are used to bring new insight to a variety of complex problems facing today's power system. The overarching goal of this research is to improve the understanding of the power system risk environment for improved operation, investment, and planning decisions. The first chapter introduces some challenges faced in planning for a sustainable power system. Chapter 2 analyzes the driving factors behind the disparity in wind energy investments among states with a goal of determining the impact that state-level policies have on incentivizing wind energy. Findings show that policy differences do not explain the disparities; physical and geographical factors are more important. Chapter 3 extends conventional wind forecasting to a risk-based focus of predicting maximum wind speeds, which are dangerous for offshore operations. Statistical models are presented that issue probabilistic predictions for the highest wind speed expected in a three-hour interval. These models achieve a high degree of accuracy and their use can improve safety and reliability in practice. Chapter 4 examines the challenges of wind power estimation for onshore wind farms. Several methods for wind power resource assessment are compared, and the weaknesses of the Jensen model are demonstrated. For two onshore farms, statistical models outperform other methods, even when very little information is known about the wind farm. Lastly, chapter 5 focuses on the power system more broadly in the context of the risks expected from tropical cyclones in a changing climate. Risks to U.S. power system infrastructure are simulated under different scenarios of tropical cyclone behavior that may result from climate change.

  19. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
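
    For reference, the sketch below gives the textbook Wilson score interval underlying the confidence estimation mentioned above; the paper uses an efficient variant, so this only illustrates the baseline method.

```python
# Wilson score interval for the probability that a property holds,
# from k satisfying runs out of n sampled executions.
import math

def wilson_interval(k, n, z=1.96):
    """Approximate 95% Wilson interval for a binomial proportion k/n."""
    p_hat = k / n
    denom = 1.0 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return center - half, center + half

# e.g. 842 of 1000 simulated executions satisfied the property P
lo, hi = wilson_interval(842, 1000)
print(f"P in [{lo:.3f}, {hi:.3f}] at ~95% confidence")
```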

  20. A Model of Statistics Performance Based on Achievement Goal Theory.

    Science.gov (United States)

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  1. Enhanced surrogate models for statistical design exploiting space mapping technology

    DEFF Research Database (Denmark)

    Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.

    2005-01-01

    We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...

  2. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed, using an area metric, after obtaining statistical models using information on the underlying distribution, the Sequential statistical modeling (SSM) approach, and the Kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data, in predicting the probability of failure for a target fatigue life.

  3. Logarithmic transformed statistical models in calibration

    International Nuclear Information System (INIS)

    Zeis, C.D.

    1975-01-01

    A general type of statistical model used for the calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is treated primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can also be used for other calibrations. (U.S.)

  4. Development of a statistical shape model of multi-organ and its performance evaluation

    International Nuclear Information System (INIS)

    Nakada, Misaki; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru

    2010-01-01

    Existing statistical shape modeling methods for an organ cannot take into account the correlation between neighboring organs. This study focuses on a level set distribution model and proposes two modeling methods for multiple organs that can take this correlation into account. The first method combines the level set functions of multiple organs into a vector, then analyses the distribution of these vectors over a training dataset by principal component analysis and builds a statistical shape model of the multiple organs. The second method constructs a statistical shape model for each organ independently and assembles the component scores of the different organs in the training dataset into a vector; it then analyses the distribution of these vectors to build a statistical shape model of the multiple organs. This paper shows the results of applying the proposed methods, trained on 15 abdominal CT volumes, to 8 unknown CT volumes. (author)

  5. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage and selection operator)

  6. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters covering the basic concepts and meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, the statistical dynamics of independent particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and nonideal lattice models, imperfect gas theory of liquids, the theory of solutions, the statistical thermodynamics of interfaces, the statistical thermodynamics of high-molecular systems, and quantum statistics.

  7. Statistical models of petrol engines vehicles dynamics

    Science.gov (United States)

    Ilie, C. O.; Marinescu, M.; Alexa, O.; Vilău, R.; Grosu, D.

    2017-10-01

    This paper focuses on statistical models of vehicle dynamics. A one-year testing program was designed and performed, using many cars of the same type with gasoline engines and different mileages. Experimental data were collected from onboard sensors and from sensors on the engine test stand, and a database containing the data of 64 tests was created. Several mathematical models were developed from the database using the system identification method. Each model is a SISO or MISO linear predictive ARMAX (AutoRegressive Moving Average with eXogenous inputs) model, i.e. a difference equation with constant coefficients. 64 equations were built for each dependency, such as engine torque as output with engine load and intake manifold pressure as inputs. Strings of 64 values were thus obtained for each type of model, and the final models were obtained using the average values of the coefficients. The accuracy of the models was assessed.

   8. The origin of nuclear spin and its effect during intermediate energy heavy ion collisions

    International Nuclear Information System (INIS)

    Zhang Guoqiang; Cao Xiguang; Fu Yao; Ma Yugang; Cai Xiangzhou; Wang Hongwei; Fang Deqing; Tian Wendong; Chen Jingen; Guo Wei; Liu Guihua

    2010-01-01

    We use the heavy-ion phase-space exploration (HIPSE) model to discuss the origin of nuclear spin and its effect in intermediate energy nuclear reactions. It is found that the spin of the projectile depends strongly on the impact parameter of the reaction system, and by contrast only weakly on the violence of the collision. Some interesting multifragmentation phenomena related to the spin are shown, especially those connected with the phase transition. At the same time, the role of the excitation energy in multifragmentation is also investigated; we find the latter plays a more robust role during the nuclear disintegration. (authors)

  9. Accurate corresponding point search using sphere-attribute-image for statistical bone model generation

    International Nuclear Information System (INIS)

    Saito, Toki; Nakajima, Yoshikazu; Sugita, Naohiko; Mitsuishi, Mamoru; Hashizume, Hiroyuki; Kuramoto, Kouichi; Nakashima, Yosio

    2011-01-01

    Statistical deformable model based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the capacity of the statistical model, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the Sphere-Attribute-Image (SAI) and improves the accuracy of the corresponding point search in statistical model generation. First, the target bone surfaces are extracted as SAIs from the CT volume. Then the textures of the SAIs are classified into regions using the Maximally-Stable-Extremal-Regions method. Next, corresponding regions are determined using normalized cross-correlation (NCC). Finally, the corresponding points in each corresponding region are determined using NCC. Our method was applied to femur bone models and worked well in the experiments. (author)

  10. Statistical modeling of static strengths of nuclear graphites with relevance to structural design

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-02-01

    Use of graphite materials for structural members poses the problem of how to take into account the statistical properties of static strength, especially tensile fracture stresses, in component structural design. The present study provides comprehensive examinations of the statistical data base and modelings of nuclear graphites. First, the report provides individual samples and their analyses for the strengths of IG-110 and PGX graphites for HTTR components. The statistical characteristics of other HTGR graphites are also exemplified from the literature. Most statistical distributions of individual samples are found to be approximately normal, and the goodness of fit to normal distributions is more satisfactory with larger sample sizes. Molded and extruded graphites, however, possess a variety of statistical properties depending on samples from different within-log locations and/or different orientations. Second, the previous statistical models, including the Weibull theory, are assessed from the viewpoint of applicability to design procedures. This leads to the conclusion that the Weibull theory and its modified versions are satisfactory only for limited parts of the tensile fracture behavior; they are not consistent with the whole set of observations. Only normal statistics are justifiable as practical approaches for discussing specified minimum ultimate strengths as statistical confidence limits for individual samples. Third, the assessment of the various statistical models emphasizes the need to develop advanced analytical ones involving modeling of the microstructural features of actual graphite materials. Improvements to other structural design methodologies are also presented. (author)

  11. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    International Nuclear Information System (INIS)

    Lovejoy, S.; Lima, M. I. P. de

    2015-01-01

    Over the range of time scales from about 10 days to 30–100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF), first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce the eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists, namely that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space and in time

  12. Monitor-Based Statistical Model Checking for Weighted Metric Temporal Logic

    DEFF Research Database (Denmark)

    Bulychev, Petr; David, Alexandre; Larsen, Kim Guldstrand

    2012-01-01

    We present a novel approach and implementation for analysing weighted timed automata (WTA) with respect to the weighted metric temporal logic (WMTL≤). Based on a stochastic semantics of WTAs, we apply statistical model checking (SMC) to estimate and test probabilities of satisfaction with desired levels of confidence.

  13. Statistical Model of the 2001 Czech Census for Interactive Presentation

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Hora, Jan; Boček, Pavel; Somol, Petr; Pudil, Pavel

    Vol. 26, č. 4 (2010), s. 1-23 ISSN 0282-423X R&D Projects: GA ČR GA102/07/1594; GA MŠk 1M0572 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Interactive statistical model * census data presentation * distribution mixtures * data modeling * EM algorithm * incomplete data * data reproduction accuracy * data mining Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.492, year: 2010 http://library.utia.cas.cz/separaty/2010/RO/grim-0350513.pdf

  14. A statistical mechanical model of economics

    Science.gov (United States)

    Lubbers, Nicholas Edward Williams

    Statistical mechanics pursues low-dimensional descriptions of systems with a very large number of degrees of freedom. I explore this theme in two contexts. The main body of this dissertation explores and extends the Yard Sale Model (YSM) of economic transactions using a combination of simulations and theory. The YSM is a simple interacting model for wealth distributions which has the potential to explain the empirical observation of Pareto distributions of wealth. I develop the link between wealth condensation and the breakdown of ergodicity due to nonlinear diffusion effects which are analogous to the geometric random walk. Using this, I develop a deterministic effective theory of wealth transfer in the YSM that is useful for explaining many quantitative results. I introduce various forms of growth to the model, paying attention to the effect of growth on wealth condensation, inequality, and ergodicity. Arithmetic growth is found to partially break condensation, and geometric growth is found to completely break condensation. Further generalizations of geometric growth with growth inequality show that the system is divided into two phases by a tipping point in the inequality parameter. The tipping point marks the line between systems which are ergodic and systems which exhibit wealth condensation. I explore generalizations of the YSM transaction scheme to arbitrary betting functions to develop notions of universality in YSM-like models. I find that wealth condensation is universal to a large class of models which can be divided into two phases. The first exhibits slow, power-law condensation dynamics, and the second exhibits fast, finite-time condensation dynamics. I find that the YSM, which exhibits exponential dynamics, is the critical, self-similar model which marks the dividing line between the two phases. The final chapter develops a low-dimensional approach to materials microstructure quantification. Modern materials design harnesses complex

  15. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    We propose a statistical generative shape model for archipelago-like structures. These kind of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radio graphs. The generative model is constructed by (1) learning ...

  16. Statistically Modeling I-V Characteristics of CNT-FET with LASSO

    Science.gov (United States)

    Ma, Dongsheng; Ye, Zuochang; Wang, Yan

    2017-08-01

    With the advent of the internet of things (IoT), the need to study new materials and devices for various applications is increasing. Traditionally we build compact models for transistors on the basis of physics, but physical models are expensive to develop and need a very long time to adjust for non-ideal effects. As the vision for the application of many novel devices is not certain, or the manufacturing process is not mature, deriving generalized, accurate physical models for such devices is very strenuous, whereas statistical modeling is becoming a potential method because of its data-oriented nature and fast implementation. In this paper, one classical statistical regression method, LASSO, is used to model the I-V characteristics of CNT-FETs, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean square prediction error of the trained model versus experimental sample data and the simulation results show that the model is acceptable for digital circuit static simulation. Such a modeling methodology can extend to general devices.
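
    A hedged sketch of the LASSO approach on synthetic stand-in data: polynomial features of the terminal voltages, scikit-learn's Lasso, and an ad hoc regularization strength; none of this reproduces the paper's measured CNT-FET data.

```python
# Sketch: LASSO regression of drain current on polynomial features of the
# terminal voltages; the "measurements" below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
vgs = rng.uniform(0.0, 1.0, 400)
vds = rng.uniform(0.0, 1.0, 400)
i_d = 1e-6 * (vgs**2 * vds + 0.3 * vgs * vds**2) + rng.normal(0, 1e-8, 400)

X = PolynomialFeatures(degree=3, include_bias=False).fit_transform(
    np.column_stack([vgs, vds]))
model = Lasso(alpha=1e-4, max_iter=100_000).fit(X, i_d * 1e6)  # target in uA

pred = model.predict(X) * 1e-6
nrmse = np.sqrt(np.mean((pred - i_d) ** 2)) / (i_d.max() - i_d.min())
print("nonzero terms:", int(np.sum(model.coef_ != 0)), " NRMSE:", nrmse)
```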

  17. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems.

    Science.gov (United States)

    Sapsis, Themistoklis P; Majda, Andrew J

    2013-08-20

    A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.

  18. Statistical model for prediction of hearing loss in patients receiving cisplatin chemotherapy.

    Science.gov (United States)

    Johnson, Andrew; Tarima, Sergey; Wong, Stuart; Friedland, David R; Runge, Christina L

    2013-03-01

    This statistical model might be used to predict cisplatin-induced hearing loss, particularly in patients undergoing concomitant radiotherapy. To create a statistical model based on pretreatment hearing thresholds to provide an individual probability for hearing loss from cisplatin therapy and, secondarily, to investigate the use of hearing classification schemes as predictive tools for hearing loss. Retrospective case-control study. Tertiary care medical center. A total of 112 subjects receiving chemotherapy and audiometric evaluation were evaluated for the study. Of these subjects, 31 met inclusion criteria for analysis. The primary outcome measurement was a statistical model providing the probability of hearing loss following the use of cisplatin chemotherapy. Fifteen of the 31 subjects had significant hearing loss following cisplatin chemotherapy. American Academy of Otolaryngology-Head and Neck Society and Gardner-Robertson hearing classification schemes revealed little change in hearing grades between pretreatment and posttreatment evaluations for subjects with or without hearing loss. The Chang hearing classification scheme could effectively be used as a predictive tool in determining hearing loss with a sensitivity of 73.33%. Pretreatment hearing thresholds were used to generate a statistical model, based on quadratic approximation, to predict hearing loss (C statistic = 0.842, cross-validated = 0.835). The validity of the model improved when only subjects who received concurrent head and neck irradiation were included in the analysis (C statistic = 0.91). A calculated cutoff of 0.45 for predicted probability has a cross-validated sensitivity and specificity of 80%. Pretreatment hearing thresholds can be used as a predictive tool for cisplatin-induced hearing loss, particularly with concomitant radiotherapy.

  19. Statistics of a neuron model driven by asymmetric colored noise.

    Science.gov (United States)

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

    Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (the time interval between two subsequent neural action potentials) and for the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
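
    A simulation sketch of the model described, assuming a simple Euler-style update and hypothetical parameter values; it estimates the ISI mean, coefficient of variation and lag-1 serial correlation coefficient that the paper treats analytically.

```python
# Sketch: perfect integrate-and-fire neuron, dv/dt = mu + eta(t), driven by
# asymmetric dichotomous noise eta jumping between +sigma and -sigma with
# different switching rates k_p and k_m.
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, k_p, k_m = 1.0, 0.8, 0.5, 1.5  # drift, amplitude, switching rates
dt, v_th = 1e-3, 1.0

v, state, t, t_last = 0.0, +1, 0.0, 0.0
isis = []
while len(isis) < 2000:
    t += dt
    rate = k_p if state == +1 else k_m    # Markov switching of the noise
    if rng.random() < rate * dt:
        state = -state
    v += (mu + state * sigma) * dt
    if v >= v_th:                         # spike and reset
        isis.append(t - t_last)
        t_last, v = t, 0.0

isis = np.array(isis)
cv = isis.std() / isis.mean()
rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]
print(f"<ISI> = {isis.mean():.3f}, CV = {cv:.3f}, rho_1 = {rho1:.3f}")
```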

  20. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft]

    Science.gov (United States)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  1. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as improved scheduling of fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; when statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
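
    A generic sketch of one of the adaptive estimators mentioned, recursive least squares with exponential forgetting, applied to an MOS-style correction of NWP wind predictions, is shown below; the data and the forgetting factor are illustrative, and this is not the authors' modified algorithm.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.995):
    """One step of exponentially weighted recursive least squares.
    theta: parameters, P: inverse-correlation matrix,
    x: regressors (e.g. [1, NWP prediction]), y: observed wind."""
    Px = P @ x
    k = Px / (lam + x @ Px)             # gain vector
    err = y - x @ theta                 # prediction residual
    theta = theta + k * err
    P = (P - np.outer(k, Px)) / lam     # rank-1 downdate with forgetting
    return theta, P

# Illustrative use: a slowly drifting relation between NWP output and
# observations, which the forgetting factor lets the estimator track.
rng = np.random.default_rng(3)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for t in range(2000):
    nwp = 8.0 + 3.0 * np.sin(t / 50.0)                       # fake NWP wind
    obs = 1.0 + 0.002 * t + 0.9 * nwp + rng.normal(0, 0.5)   # drifting MOS
    theta, P = rls_update(theta, P, np.array([1.0, nwp]), obs)
print("estimated MOS coefficients [bias, slope]:", theta)
```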

  2. Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames

    Science.gov (United States)

    Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz

    2017-11-01

    The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference Δϕ at the respective points. The joint probability density function (jPDF) of these two parameters P(Δϕ, l) is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds number. This feature of DE statistics was exploited in a Boltzmann-type evolution-equation-based model for the probability density function (PDF) of the distance between the extremal points P(l) in isotropic turbulence. Later, this model was extended to the jPDF P(Δϕ, l) and then adapted for use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project ``Milestone''.

  3. Model selection for contingency tables with algebraic statistics

    NARCIS (Netherlands)

    Krampe, A.; Kuhnt, S.; Gibilisco, P.; Riccimagno, E.; Rogantin, M.P.; Wynn, H.P.

    2009-01-01

    Goodness-of-fit tests based on chi-square approximations are commonly used in the analysis of contingency tables. Results from algebraic statistics combined with MCMC methods provide alternatives to the chi-square approximation. However, within a model selection procedure usually a large number of

  4. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
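
    A hedged sketch of the estimator idea, restricted for brevity to the binary special case of the polytomous model, follows; the objective is the empirical density power divergence with tuning parameter alpha (alpha -> 0 recovers maximum likelihood), and all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def dpd_objective(beta, X, y, alpha=0.5):
    """Empirical density power divergence for binary logistic regression.
    alpha > 0 trades efficiency for robustness; alpha -> 0 gives the MLE."""
    p = np.clip(expit(X @ beta), 1e-12, 1 - 1e-12)
    sum_f = p**(1 + alpha) + (1 - p)**(1 + alpha)  # sum_y f(y|x)^(1+alpha)
    f_obs = np.where(y == 1, p, 1 - p)             # f(y_i|x_i)
    return np.mean(sum_f - (1 + 1 / alpha) * f_obs**alpha)

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (rng.random(n) < expit(X @ np.array([-0.5, 2.0]))).astype(int)
y[:10] = 1 - y[:10]                 # a few gross outliers in the labels

fit = minimize(dpd_objective, x0=np.zeros(2), args=(X, y, 0.5))
print("robust MDPDE coefficients:", fit.x)
```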

  5. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    Science.gov (United States)

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become more homogeneous.
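
    The graphical calibration check described, predicted versus observed events over risk deciles, can be sketched as follows; the predicted risks here are synthetic stand-ins for model output.

```python
import numpy as np

def calibration_by_decile(p, y):
    """Compare predicted and observed event counts over risk deciles."""
    order = np.argsort(p)                       # sort patients by risk
    for dec, idx in enumerate(np.array_split(order, 10), start=1):
        print(f"decile {dec:2d}: expected {p[idx].sum():7.1f}  "
              f"observed {y[idx].sum():5d}")

rng = np.random.default_rng(5)
p = rng.beta(1, 20, size=20000)           # synthetic predicted mortality risks
y = (rng.random(p.size) < p).astype(int)  # outcomes consistent with the model
calibration_by_decile(p, y)               # well-calibrated: columns agree
```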

  6. Probing the exchange statistics of one-dimensional anyon models

    Science.gov (United States)

    Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis

    2018-05-01

    We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we show that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.

  7. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the not-missing-at-random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  8. Atmospheric corrosion: statistical validation of models

    International Nuclear Information System (INIS)

    Diaz, V.; Martinez-Luaces, V.; Guineo-Cobs, G.

    2003-01-01

    In this paper we discuss two different methods for the validation of regression models, applied to corrosion data. One of them is based on the correlation coefficient and the other is the statistical test of lack of fit. Both methods are used here to analyse the fit of the bilogarithmic model in order to predict corrosion for very low carbon steel substrates in rural and urban-industrial atmospheres in Uruguay. Results for parameters A and n of the bilogarithmic model are reported here. For this purpose, all repeated values were used instead of the usual average values. Modelling is carried out using experimental data corresponding to steel substrates under the same initial meteorological conditions (in fact, they are placed in the rack at the same time). Results of the correlation coefficient are compared with the lack-of-fit test at two different significance levels (α=0.01 and α=0.05). Unexpected differences between them are explained and finally it is possible to conclude, at least for the studied atmospheres, that the bilogarithmic model does not properly fit the experimental data. (Author) 18 refs
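
    A sketch of the lack-of-fit F test with replicated observations, applied to a bilogarithmic model log C = log A + n log t, is given below; the corrosion data are synthetic, so (unlike the paper's finding for its atmospheres) the test should not reject here.

```python
import numpy as np
from scipy.stats import f as f_dist

# Replicated exposure times (years) and synthetic corrosion losses generated
# from the bilogarithmic model log C = log A + n log t.
t = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 4)
rng = np.random.default_rng(6)
logC = np.log(30.0) + 0.6 * np.log(t) + rng.normal(0, 0.05, t.size)

X = np.column_stack([np.ones_like(t), np.log(t)])
beta, *_ = np.linalg.lstsq(X, logC, rcond=None)
ss_res = ((logC - X @ beta) ** 2).sum()

# Pure error: scatter of replicates around their group means.
ss_pe, m = 0.0, 0
for ti in np.unique(t):
    grp = logC[t == ti]
    ss_pe += ((grp - grp.mean()) ** 2).sum()
    m += 1

n_obs, n_par = t.size, 2
ss_lof = ss_res - ss_pe                       # lack-of-fit sum of squares
F = (ss_lof / (m - n_par)) / (ss_pe / (n_obs - m))
pval = f_dist.sf(F, m - n_par, n_obs - m)
print(f"A = {np.exp(beta[0]):.1f}, n = {beta[1]:.2f}, "
      f"F = {F:.2f}, p = {pval:.3f}")
```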

  9. Automatic generation of statistical pose and shape models for articulated joints.

    Science.gov (United States)

    Xin Chen; Graham, Jim; Hutchinson, Charles; Muir, Lindsay

    2014-02-01

    Statistical analysis of motion patterns of body joints is potentially useful for detecting and quantifying pathologies. However, building a statistical motion model across different subjects remains a challenging task, especially for a complex joint like the wrist. We present a novel framework for simultaneous registration and segmentation of multiple 3-D (CT or MR) volumes of different subjects at various articulated positions. The framework starts with a pose model generated from 3-D volumes captured at different articulated positions of a single subject (template). This initial pose model is used to register the template volume to image volumes from new subjects. During this process, the Grow-Cut algorithm is used in an iterative refinement of the segmentation of the bone along with the pose parameters. As each new subject is registered and segmented, the pose model is updated, improving the accuracy of successive registrations. We applied the algorithm to CT images of the wrist from 25 subjects, each at five different wrist positions and demonstrated that it performed robustly and accurately. More importantly, the resulting segmentations allowed a statistical pose model of the carpal bones to be generated automatically without interaction. The evaluation results show that our proposed framework achieved accurate registration with an average mean target registration error of 0.34 ± 0.27 mm. The automatic segmentation results also show high consistency with the ground truth obtained semi-automatically. Furthermore, we demonstrated the capability of the resulting statistical pose and shape models by using them to generate a measurement tool for scaphoid-lunate dissociation diagnosis, which achieved 90% sensitivity and specificity.

  10. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    Science.gov (United States)

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicolson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
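
    The abstract does not spell out the skill-score definition, so the sketch below assumes a common MSE-based form, with the homogeneous-atmosphere prediction as the reference; the transmission-loss values are synthetic.

```python
import numpy as np

def skill_score(obs, pred, ref):
    """MSE-based skill relative to a reference model, in percent.
    100% = perfect prediction, 0% = no better than the reference."""
    mse_pred = np.mean((obs - pred) ** 2)
    mse_ref = np.mean((obs - ref) ** 2)
    return 100.0 * (1.0 - mse_pred / mse_ref)

# Illustrative transmission-loss values (dB); the reference mimics
# propagation in a homogeneous atmosphere over rigid ground.
rng = np.random.default_rng(7)
obs = rng.normal(60.0, 8.0, 500)
ref = np.full_like(obs, 60.0)                       # homogeneous baseline
engineering = obs + rng.normal(0, 6.0, obs.size)    # cruder model
learned = obs + rng.normal(0, 0.5, obs.size)        # statistical learner
print(f"engineering skill: {skill_score(obs, engineering, ref):5.1f}%")
print(f"learned skill:     {skill_score(obs, learned, ref):5.1f}%")
```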

  11. What type of statistical model to choose for the analysis of radioimmunoassays

    International Nuclear Information System (INIS)

    Huet, S.

    1984-01-01

    The current techniques used for the statistical analysis of radioimmunoassays are not very satisfactory for either the statistician or the biologist. They are based on an attempt to linearize the response curve so as to avoid complicated computations. The present article shows that this practice has considerable, often neglected, effects on the statistical assumptions which must be formulated. A more rigorous analysis is proposed, based on the four-parameter logistic model. The advantages of this method are that the statistical assumptions formulated are based on observed data and that the model can be applied to almost all radioimmunoassays [fr
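
    A minimal sketch of fitting the four-parameter logistic model to calibration-curve data follows; the dose-response values are synthetic and the parameterization is the standard one, which may differ in detail from the article's.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = response at zero dose, d = response at
    infinite dose, c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Synthetic radioimmunoassay-style calibration data.
rng = np.random.default_rng(8)
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = four_pl(dose, 100.0, 1.2, 5.0, 10.0) + rng.normal(0, 2.0, dose.size)

popt, pcov = curve_fit(four_pl, dose, resp, p0=[100.0, 1.0, 5.0, 10.0])
print("fitted (a, b, c, d):", np.round(popt, 2))
```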

  12. Multimesonic decays of charmonium states in the statistical quark model

    International Nuclear Information System (INIS)

    Montvay, I.; Toth, J.D.

    1978-01-01

    The presently known data on multimesonic decays of chi and psi states are fitted in a statistical quark model, in which the matrix elements are assumed to be constant and resonances as well as both strong and second-order electromagnetic processes are taken into account. The experimental data are well reproduced by the model. Unknown branching ratios for the rest of the multimesonic channels are predicted. The fit leaves about 40% for baryonic and radiative channels in the case of J/psi(3095). The fitted parameters of the J/psi decays are used to predict the mesonic decays of the pseudoscalar eta_c. The statistical quark model seems to allow the calculation of competitive multiparticle processes for the studied decays. (D.P.)

  13. Uniting statistical and individual-based approaches for animal movement modelling.

    Science.gov (United States)

    Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel

    2014-01-01

    The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, because accessing internal states through field work is practically impossible and current statistical models cannot produce dynamic outputs. To address these issues, we developed a robust method which combines statistical and individual-based modelling (IBM). Using a statistical technique for forward modelling of the IBM has the advantage of being faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent-pattern validation and was tested for two different scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.

  14. Statistical inference to advance network models in epidemiology.

    Science.gov (United States)

    Welch, David; Bansal, Shweta; Hunter, David R

    2011-03-01

    Contact networks are playing an increasingly important role in the study of epidemiology. Most of the existing work in this area has focused on considering the effect of underlying network structure on epidemic dynamics by using tools from probability theory and computer simulation. This work has provided much insight on the role that heterogeneity in host contact patterns plays on infectious disease dynamics. Despite the important understanding afforded by the probability and simulation paradigm, this approach does not directly address important questions about the structure of contact networks such as what is the best network model for a particular mode of disease transmission, how parameter values of a given model should be estimated, or how precisely the data allow us to estimate these parameter values. We argue that these questions are best answered within a statistical framework and discuss the role of statistical inference in estimating contact networks from epidemiological data. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  16. Model-independent tracking of criticality signals in nuclear multifragmentation data

    International Nuclear Information System (INIS)

    Frankland, J.D.; Chbihi, A.

    2003-01-01

    We study multifragment production in central heavy-ion collisions using model-independent universal fluctuations theory and a wide range of INDRA data for central collisions of symmetric systems of total mass A = 75-400 at bombarding energies from 25 to 100 MeV/nucleon. We find evidence for two different regimes at low and high incident energies, respectively, defined by the fluctuation scaling properties of the largest fragment in each event, Z(max), which plays the role of an order parameter. Data for a wide range of system masses and incident energies collapse onto an approximately universal scaling function in each regime. The form of the scaling functions is established, and their dependence on total system mass and bombarding energy is mapped out. (authors)

  17. Statistical Power Analysis with Missing Data: A Structural Equation Modeling Approach

    CERN Document Server

    Davey, Adam

    2009-01-01

    Statistical power analysis has revolutionized the ways in which we conduct and evaluate research. Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread applications. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling. It answers many practical questions, such as: How does missing data affect the statistical power of a study? How much power is likely with different amounts and types

  18. Statistical methods for mechanistic model validation: Salt Repository Project

    International Nuclear Information System (INIS)

    Eggett, D.L.

    1988-07-01

    As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP program is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near field environment. This involves several different submodels such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs

  19. Statistical Modeling of Energy Production by Photovoltaic Farms

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Pelikán, Emil; Krč, Pavel; Eben, Kryštof; Musílek, P.

    2011-01-01

    Roč. 5, č. 9 (2011), s. 785-793 ISSN 1934-8975 Grant - others:GA AV ČR(CZ) M100300904 Institutional research plan: CEZ:AV0Z10300504 Keywords : electrical energy * solar energy * numerical weather prediction model * nonparametric regression * beta regression Subject RIV: BB - Applied Statistics, Operational Research

  20. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...... of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study....

  1. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  2. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
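
    A minimal Monte Carlo sketch of the construction described, estimating the covariance matrix of the comparison error from one shared systematic error source plus independent random errors, and scaling a 95% constant-probability contour, is given below; the uncertainty magnitudes and correlation structure are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(9)
n_mc, n_q = 20000, 2                  # Monte Carlo replicates, two quantities
sigma_sys = np.array([1.0, 0.8])      # sensitivity of each quantity to the
                                      # shared systematic error source
sigma_rand = np.array([0.5, 0.7])     # independent random std. uncertainties

# Comparison error E = D - S: one shared systematic draw per replicate
# (correlating the two quantities) plus independent random noise.
b = rng.normal(0.0, 1.0, size=(n_mc, 1))
E = b * sigma_sys + rng.normal(0.0, sigma_rand, size=(n_mc, n_q))

cov = np.cov(E, rowvar=False)         # covariance matrix of comparison error
r2_95 = chi2.ppf(0.95, df=n_q)        # squared Mahalanobis radius, 95% contour
print("estimated covariance matrix:\n", cov)
print("95% constant-probability contour: e^T C^{-1} e <=", round(r2_95, 2))
```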

  3. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratory Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  4. Statistical shear lag model - unraveling the size effect in hierarchical composites.

    Science.gov (United States)

    Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D

    2015-05-01

    Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogeneous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
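
    For contrast with the staggered architecture analysed in the paper, the sketch below illustrates plain weakest-link (chain) statistics with Weibull link strengths, where mean strength decays as n^(-1/m); the statistical shear lag model modifies exactly this kind of size dependence through load transfer between overlapping constituents. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
m, s0 = 5.0, 1.0                    # Weibull modulus and scale of one link

def chain_strength(n_links, n_samples=20000):
    """Mean strength of a chain = weakest of its n_links (weakest link)."""
    links = s0 * rng.weibull(m, size=(n_samples, n_links))
    return links.min(axis=1).mean()

for n in [1, 10, 100, 1000]:
    # Weakest-link theory predicts mean strength proportional to n^(-1/m).
    print(f"n = {n:5d}  mean strength = {chain_strength(n):.3f}  "
          f"theory scale = {n ** (-1.0 / m):.3f}")
```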

  5. Modelling diversity in building occupant behaviour: a novel statistical approach

    DEFF Research Database (Denmark)

    Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm

    2016-01-01

    We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...

  6. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    Science.gov (United States)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of the optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics data base is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  7. Effect of model choice and sample size on statistical tolerance limits

    International Nuclear Information System (INIS)

    Duran, B.S.; Campbell, K.

    1980-03-01

    Statistical tolerance limits are estimates of large (or small) quantiles of a distribution, quantities which are very sensitive to the shape of the tail of the distribution. The exact nature of this tail behavior cannot be ascertained from small samples, so statistical tolerance limits are frequently computed using a statistical model chosen on the basis of theoretical considerations or prior experience with similar populations. This report illustrates the effects of such choices on the computations.
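
    As a concrete example of the model-choice sensitivity discussed, the sketch below computes the exact one-sided tolerance limit under a normal model via the noncentral t distribution; the data are synthetic, and the normal assumption is itself the model choice whose tail behavior drives the result.

```python
import numpy as np
from scipy.stats import nct, norm

def upper_tolerance_limit(x, coverage=0.95, confidence=0.95):
    """One-sided upper tolerance limit for a normal sample: with the stated
    confidence, at least `coverage` of the population lies below the limit."""
    n = len(x)
    delta = norm.ppf(coverage) * np.sqrt(n)        # noncentrality parameter
    k = nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)

rng = np.random.default_rng(11)
for n in [10, 30, 100]:                            # effect of sample size
    x = rng.normal(50.0, 5.0, n)
    print(f"n = {n:3d}  95/95 upper tolerance limit = "
          f"{upper_tolerance_limit(x):.2f}")
```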

  8. Statistical properties of the nuclear shell-model Hamiltonian

    International Nuclear Information System (INIS)

    Dias, H.; Hussein, M.S.; Oliveira, N.A. de

    1986-01-01

    The statistical properties of a realistic nuclear shell-model Hamiltonian are investigated in sd-shell nuclei. The probability distribution of the basis-vector amplitude is calculated and compared with the Porter-Thomas distribution. Relevance of the results to the calculation of the giant resonance mixing parameter is pointed out. (Author) [pt
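
    A sketch of the comparison described, amplitude statistics against the Porter-Thomas distribution, is given below using a GOE random matrix as a stand-in for a realistic shell-model Hamiltonian; for GOE eigenvectors the normalized squared amplitudes y = N c^2 should follow p(y) = exp(-y/2)/sqrt(2*pi*y).

```python
import numpy as np

rng = np.random.default_rng(12)
N = 400
A = rng.normal(size=(N, N))
H = (A + A.T) / np.sqrt(2.0 * N)        # GOE surrogate Hamiltonian
_, vecs = np.linalg.eigh(H)

y = N * vecs**2                          # normalized squared amplitudes
hist, edges = np.histogram(y.ravel(), bins=np.linspace(0.05, 8.0, 40),
                           density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
# Porter-Thomas density: chi-squared with one degree of freedom.
pt = np.exp(-mid / 2.0) / np.sqrt(2.0 * np.pi * mid)
print("max |empirical - Porter-Thomas|:", np.abs(hist - pt).max())
```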

  9. On the statistical comparison of climate model output and climate data

    International Nuclear Information System (INIS)

    Solow, A.R.

    1991-01-01

    Some broad issues arising in the statistical comparison of the output of climate models with the corresponding climate data are reviewed. Particular attention is paid to the question of detecting climate change. The purpose of this paper is to review some statistical approaches to the comparison of the output of climate models with climate data. There are many statistical issues arising in such a comparison. The author will focus on some of the broader issues, although some specific methodological questions will arise along the way. One important potential application of the approaches discussed in this paper is the detection of climate change. Although much of the discussion will be fairly general, he will try to point out the appropriate connections to the detection question. 9 refs

  10. On the statistical comparison of climate model output and climate data

    International Nuclear Information System (INIS)

    Solow, A.R.

    1990-01-01

    Some broad issues arising in the statistical comparison of the output of climate models with the corresponding climate data are reviewed. Particular attention is paid to the question of detecting climate change. The purpose of this paper is to review some statistical approaches to the comparison of the output of climate models with climate data. There are many statistical issues arising in such a comparison. The author will focus on some of the broader issues, although some specific methodological questions will arise along the way. One important potential application of the approaches discussed in this paper is the detection of climate change. Although much of the discussion will be fairly general, he will try to point out the appropriate connections to the detection question

  11. Statistical Models to Assess the Health Effects and to Forecast Ground Level Ozone

    Czech Academy of Sciences Publication Activity Database

    Schlink, U.; Herbath, O.; Richter, M.; Dorling, S.; Nunnari, G.; Cawley, G.; Pelikán, Emil

    2006-01-01

    Roč. 21, č. 4 (2006), s. 547-558 ISSN 1364-8152 R&D Projects: GA AV ČR 1ET400300414 Institutional research plan: CEZ:AV0Z10300504 Keywords : statistical models * ground level ozone * health effects * logistic model * forecasting * prediction performance * neural network * generalised additive model * integrated assessment Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.992, year: 2006

  12. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75% of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
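
    The observable both approaches predict, the empirical flow duration curve, can be sketched from a streamflow record as follows; the daily flows are synthetic and the Weibull plotting position is one common convention.

```python
import numpy as np

def flow_duration_curve(q):
    """Empirical FDC: flows sorted in decreasing order against their
    exceedance probability (Weibull plotting position i/(n+1))."""
    q_sorted = np.sort(q)[::-1]
    exceedance = np.arange(1, q.size + 1) / (q.size + 1)
    return exceedance, q_sorted

rng = np.random.default_rng(13)
q = rng.lognormal(mean=1.0, sigma=1.2, size=3650)  # synthetic daily flows
p, qs = flow_duration_curve(q)
for target in [0.05, 0.5, 0.95]:                   # Q5, Q50, Q95
    print(f"Q{int(target * 100):02d} = {qs[np.searchsorted(p, target)]:.2f}")
```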

  13. On an uncorrelated jet model with Bose-Einstein statistics

    International Nuclear Information System (INIS)

    Bilic, N.; Dadic, I.; Martinis, M.

    1978-01-01

    Starting from the density of states of an ideal Bose-Einstein gas, an uncorrelated jet model with Bose-Einstein statistics has been formulated. The transition to continuum is based on the Touschek invariant measure. It has been shown that in this model average multiplicity increases logarithmically with total energy, while the inclusive distribution shows ln s violation of scaling. (author)

  14. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    NARCIS (Netherlands)

    Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.

    2017-01-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood

  15. A simple statistical model for geomagnetic reversals

    Science.gov (United States)

    Constable, Catherine

    1990-01-01

    The diversity of paleomagnetic records of geomagnetic reversals now available indicate that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.

  16. Vortex dynamics and Lagrangian statistics in a model for active turbulence.

    Science.gov (United States)

    James, Martin; Wilczek, Michael

    2018-02-14

    Cellular suspensions such as dense bacterial flows exhibit a turbulence-like phase under certain conditions. We study this phenomenon of "active turbulence" statistically by using numerical tools. Following Wensink et al. (Proc. Natl. Acad. Sci. U.S.A. 109, 14308 (2012)), we model active turbulence by means of a generalized Navier-Stokes equation. Two-point velocity statistics of active turbulence, both in the Eulerian and the Lagrangian frame, are explored. We characterize the scale-dependent features of two-point statistics in this system. Furthermore, we extend this statistical study with measurements of vortex dynamics. Our observations suggest that the large-scale statistics of active turbulence are close to Gaussian with sub-Gaussian tails.

  17. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    Science.gov (United States)

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik

  18. Discrete ellipsoidal statistical BGK model and Burnett equations

    Science.gov (United States)

    Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei

    2018-06-01

    A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK model has a flexible Prandtl number. For the discrete ES-BGK model at the Burnett level, two kinds of discrete velocity models are introduced and the relations between nonequilibrium quantities and the viscous stress and heat flux at the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function from the macroscopic quantities and their space derivatives. The recovery scheme works not only for discrete Boltzmann simulations but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.

  19. Eigenfunction statistics for Anderson model with Hölder continuous ...

    Indian Academy of Sciences (India)

    The Institute of Mathematical Sciences, Taramani, Chennai 600 113, India ... Anderson model; Hölder continuous measure; Poisson statistics.

  20. A BRDF statistical model applying to space target materials modeling

    Science.gov (United States)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model in fitting densely sampled measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space-target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen. This demonstrates the refined model's good ability to characterize materials.

  1. Dynamical scenario of intermediary mass fragments formation in heavy ion collisions

    International Nuclear Information System (INIS)

    Ayik, S.; Belkacem, M.; Gregoire, C.; Stryjewski, J.; Suraud, E.

    1989-01-01

    We briefly review the possible dynamical scenario of fragment formation in heavy-ion collisions at some tens of MeV/A. We discuss how present-day dynamical models can describe fragment formation. We next turn to the Boltzmann-Langevin formalism, which provides a well-defined theoretical framework for understanding the growth of the dynamical instabilities leading to multifragmentation. We present a first numerical solution of the Boltzmann-Langevin equation and we apply the formalism to the onset of multifragmentation of the 40 Ca + 40 Ca system between 20 and 60 MeV/A beam energy [fr

  2. Particles and nuclei, letters

    International Nuclear Information System (INIS)

    2000-01-01

    The present collection of letters from JINR, Dubna, contains eight separate records on quantum field theory and symmetries in nuclear physics, multifractal analysis of AFM images of Nb thin film surfaces, fast-acting memory for multichannel time-to-digital converters, an analysis of the anomalous Cherenkov radiation obtained in the relativistic lead ion beam at the CERN SPS, the problem of consistency of the thermal-spike model with the experimentally determined electron temperature, ATLAS calorimeter performance for charged pions, collective flow in multifragmentation induced by relativistic helium and carbon ions, and the variation of the Coulomb repulsion in multifragmentation

  3. A New Form of Nondestructive Strength-Estimating Statistical Models Accounting for Uncertainty of Model and Aging Effect of Concrete

    International Nuclear Information System (INIS)

    Hong, Kee Jeung; Kim, Jee Sang

    2009-01-01

    As concrete ages, the surrounding environment is expected to have a growing influence on the concrete. As all the impacts of the environment cannot be considered in the strength-estimating model of a nondestructive concrete test, the increase in concrete age leads to growing uncertainty in the strength-estimating model. Therefore, the variation of the model error increases. It is necessary to include those impacts in the probability model of concrete strength obtained from nondestructive tests so as to build a more accurate reliability model for structural performance evaluation. This paper reviews and categorizes the existing strength-estimating statistical models for nondestructive concrete tests, and suggests a new form of strength-estimating statistical model that properly reflects the model uncertainty due to aging of the concrete. This new form of statistical model will lay the foundation for more accurate structural performance evaluation.

  4. Spatio-temporal statistical models with applications to atmospheric processes

    International Nuclear Information System (INIS)

    Wikle, C.K.

    1996-01-01

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not been considered in the context of these methods and several open questions are posed. The first paper attempts to determine a means of characterizing the semiannual oscillation (SAO) spatial variation in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for the analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model

  5. Statistical mechanics of directed models of polymers in the square lattice

    International Nuclear Information System (INIS)

    Rensburg, E J Janse van

    2003-01-01

    Directed square lattice models of polymers and vesicles have received considerable attention in the recent mathematical and physical sciences literature. These are idealized geometric directed lattice models introduced to study phase behaviour in polymers, and include Dyck paths, partially directed paths, directed trees and directed vesicles models. Directed models are closely related to models studied in the combinatorics literature (and are often exactly solvable). They are also simplified versions of a number of statistical mechanics models, including the self-avoiding walk, lattice animals and lattice vesicles. The exchange of approaches and ideas between statistical mechanics and combinatorics have considerably advanced the description and understanding of directed lattice models, and this will be explored in this review. The combinatorial nature of directed lattice path models makes a study using generating function approaches most natural. In contrast, the statistical mechanics approach would introduce partition functions and free energies, and then investigate these using the general framework of critical phenomena. Generating function and statistical mechanics approaches are closely related. For example, questions regarding the limiting free energy may be approached by considering the radius of convergence of a generating function, and the scaling properties of thermodynamic quantities are related to the asymptotic properties of the generating function. In this review the methods for obtaining generating functions and determining free energies in directed lattice path models of linear polymers is presented. These methods include decomposition methods leading to functional recursions, as well as the Temperley method (that is implemented by creating a combinatorial object, one slice at a time). A constant term formulation of the generating function will also be reviewed. The thermodynamic features and critical behaviour in models of directed paths may be

  6. Interpretation of the results of statistical measurements. [search for basic probability model]

    Science.gov (United States)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  7. An improved mixing model providing joint statistics of scalar and scalar dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)

    2008-11-15

    For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of a passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field lengthscales. (author)

  8. Statistics of excitations in the electron glass model

    Science.gov (United States)

    Palassini, Matteo

    2011-03-01

    We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level, by measuring accurately the density of states of charged and electron-hole pair excitations via finite temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.

  9. Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks

    Directory of Open Access Journals (Sweden)

    Luciano Pivoto Specht

    2007-03-01

    It is of great importance to know binders' viscosity in order to carry out handling, mixing, and application processes, as well as asphalt mix compaction, in highway surfacing. This paper presents the results of viscosity measurements on asphalt-rubber binders prepared in the laboratory. The binders were prepared varying the rubber content, rubber particle size, and the duration and temperature of mixing, all following a statistical design plan. Statistical analysis and artificial neural networks were used to create mathematical models for predicting the binders' viscosity. The comparison between experimental data and results simulated with the generated models showed better performance for the neural network analysis than for the statistical models. The results indicated that the rubber content and duration of mixing have the greatest influence on the observed viscosity within the considered interval of parameter variation.
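    A hedged sketch of the comparison performed in the paper follows: fit both a linear statistical model and a small neural network to predict binder viscosity from the mix design variables. The data are synthetic stand-ins and the response function is invented; only the modelling workflow is illustrated.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 200
    X = np.column_stack([
        rng.uniform(10, 25, n),     # rubber content [%]
        rng.uniform(0.2, 2.0, n),   # rubber particle size [mm]
        rng.uniform(30, 120, n),    # mixing duration [min]
        rng.uniform(150, 210, n),   # mixing temperature [deg C]
    ])
    # Synthetic nonlinear response standing in for measured viscosity [cP].
    y = (800 + 90 * X[:, 0] + 4 * X[:, 2] + 0.5 * X[:, 0] * X[:, 2]
         + rng.normal(0, 150, n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    lin = LinearRegression().fit(X_tr, y_tr)
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                     random_state=0)).fit(X_tr, y_tr)
    print("linear R^2:", lin.score(X_te, y_te))
    print("neural R^2:", net.score(X_te, y_te))
    ```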

  10. ARSENIC CONTAMINATION IN GROUNDWATER: A STATISTICAL MODELING

    OpenAIRE

    Palas Roy; Naba Kumar Mondal; Biswajit Das; Kousik Das

    2013-01-01

    High arsenic concentrations in natural groundwater in most of the tubewells of the Purbasthali-Block II area of Burdwan district (W.B., India) have recently come into focus as a serious environmental concern. This paper intends to illustrate the statistical modeling of arsenic-contaminated groundwater to identify the interrelation of arsenic content with other participating groundwater parameters, so that the arsenic contamination level can easily be predicted by analyzing only such parameters. Mul...

  11. Modeling the basic superconductor thermodynamical-statistical characteristics

    International Nuclear Information System (INIS)

    Palenskis, V.; Maknys, K.

    1999-01-01

    In accordance with the Landau second-order phase transition and other thermodynamical-statistical relations for superconductors, and using the energy gap as an order parameter in the electron free energy representation, the temperature dependences of the fundamental characteristics of electrons, such as the free energy, the total energy, the energy gap, the entropy, and the heat capacity, were obtained. The modeling results, in principle, reflect well the basic low- and high-temperature superconductor characteristics

  12. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. The underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimes of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle a time-varying ventilation rate. This is a major advantage compared to some of the methods currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when a time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
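    The sketch below illustrates the Kalman-filtering idea on a deliberately simplified state-space model (first-order tracer decay observed in log-concentration, with the ventilation rate following a random walk); it is not the authors' exact model, and all noise variances are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt = 1.0 / 60.0                       # time step [h]
    T = 600
    lam_true = 0.5 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, T))  # true rate [1/h]

    # Simulate tracer decay with measurement noise on log-concentration.
    c = np.empty(T); c[0] = 2000.0
    for t in range(1, T):
        c[t] = c[t - 1] * np.exp(-lam_true[t] * dt)
    y = np.log(c) + rng.normal(0, 0.005, T)

    # Observation model: y_t - y_{t-1} = -lambda_t * dt + noise (variance 2*sigma^2).
    # State model:       lambda_t = lambda_{t-1} + w_t (random walk).
    lam_hat, P = 1.0, 1.0                 # initial state mean and variance
    q, r = 1e-4, 2 * 0.005 ** 2           # process and observation noise variances
    est = []
    for t in range(1, T):
        P += q                            # predict
        z = y[t] - y[t - 1]               # observed log-decrement
        H = -dt
        K = P * H / (H * P * H + r)       # Kalman gain
        lam_hat += K * (z - H * lam_hat)  # update
        P *= (1 - K * H)
        est.append(lam_hat)
    print("final estimate vs truth:", est[-1], lam_true[-1])
    ```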

  13. Application of statistical mechanical methods to the modeling of social networks

    Science.gov (United States)

    Strathman, Anthony Robert

    With the recent availability of large-scale social data sets, social networks have become open to quantitative analysis via the methods of statistical physics. We examine the statistical properties of a real large-scale social network, generated from cellular phone call-trace logs. We find this network, like many other social networks, to be assortative (r = 0.31) and clustered (i.e., strongly transitive, C = 0.21). We measure fluctuation scaling to identify the presence of internal structure in the network and find that structural inhomogeneity effectively disappears at the scale of a few hundred nodes, though there is no sharp cutoff. We introduce an agent-based model of social behavior, designed to model the formation and dissolution of social ties. The model is a modified Metropolis algorithm containing agents operating under the basic sociological constraints of reciprocity, communication need and transitivity. The model introduces the concept of a social temperature. We go on to show that this simple model reproduces the global statistical network features (incl. assortativity, connected fraction, mean degree, clustering, and mean shortest path length) of the real network data and undergoes two phase transitions, one from a "gas" to a "liquid" state and the second from a liquid to a glassy state, as a function of this social temperature.
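    A heavily hedged sketch of such a Metropolis-style tie-formation dynamic with a social temperature is given below; the specific energy terms (a quadratic "communication need" and a triangle reward for transitivity) are invented stand-ins for the constraints described, not the dissertation's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, T_social, steps = 40, 0.5, 10000
    A = np.zeros((N, N), dtype=int)           # symmetric adjacency (ties)

    def energy(A):
        deg = A.sum(1)
        need = ((deg - 4) ** 2).sum()         # each agent "wants" ~4 ties
        triangles = np.trace(A @ A @ A) / 6.0
        return need - triangles               # transitivity lowers energy

    E = energy(A)
    for _ in range(steps):
        i, j = rng.integers(N), rng.integers(N)
        if i == j:
            continue
        A[i, j] ^= 1; A[j, i] ^= 1            # propose flipping one tie
        E_new = energy(A)
        if rng.random() < np.exp(min(0.0, -(E_new - E) / T_social)):
            E = E_new                         # accept (Metropolis rule)
        else:
            A[i, j] ^= 1; A[j, i] ^= 1        # reject: undo the flip
    print("mean degree:", A.sum() / N)
    ```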

  14. Statistical 3D damage accumulation model for ion implant simulators

    CERN Document Server

    Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.
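    For reference, the modified Kinchin-Pease (NRT-type) estimate at the heart of such damage models can be written as a one-line rule; the displacement threshold below is an illustrative default, and this helper is of course far simpler than the authors' full 3D accumulation model.

    ```python
    # Number of displacements produced by a recoil depositing damage energy E_nu,
    # given a displacement threshold E_d (standard modified Kinchin-Pease form).
    def kinchin_pease_displacements(E_nu_eV: float, E_d_eV: float = 15.0) -> float:
        if E_nu_eV < E_d_eV:
            return 0.0                         # sub-threshold: no stable displacement
        if E_nu_eV < 2.5 * E_d_eV:
            return 1.0                         # single Frenkel pair
        return 0.8 * E_nu_eV / (2.0 * E_d_eV)  # cascade regime

    print(kinchin_pease_displacements(1000.0))  # ~26.7 displacements
    ```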

  15. Statistical 3D damage accumulation model for ion implant simulators

    International Nuclear Information System (INIS)

    Hernandez-Mangas, J.M.; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M.

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided

  16. Statistical modelling of space-time processes with application to wind power

    DEFF Research Database (Denmark)

    Lenzi, Amanda

    This thesis aims at contributing to the wind power literature by building and evaluating new statistical techniques for producing forecasts at multiple locations and lead times using spatio-temporal information. By exploring the features of a rich portfolio of wind farms in western Denmark, we investigate ... We propose spatial models for predicting wind power generation at two different time scales: for annual average wind power generation and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial ...

  17. Modelling unsupervised online-learning of artificial grammars: linking implicit and statistical learning.

    Science.gov (United States)

    Rohrmeier, Martin A; Cross, Ian

    2014-07-01

    Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies.
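    The evaluation framework described (alternating a forced grammaticality judgment with learning of the same string) can be sketched with a bigram chunk model as below; the grammar strings, the chunk-strength score and the decision threshold are all illustrative assumptions, not the paper's exact models.

    ```python
    from collections import Counter

    grammar_strings = ["MSSV", "MSV", "MSSSV", "VXM", "VXXM"]       # "training" set
    test_strings = ["MSV", "MXV", "VXM", "SSMV", "MSSV", "XXXX"]    # mixed items

    def bigrams(s):
        return [s[i:i + 2] for i in range(len(s) - 1)]

    counts = Counter(bg for s in grammar_strings for bg in bigrams(s))

    for s in test_strings:
        total = sum(counts.values()) or 1
        # Chunk-strength score: mean relative frequency of the string's bigrams.
        score = sum(counts[bg] for bg in bigrams(s)) / (len(bigrams(s)) * total)
        judged_grammatical = score > 0.1            # forced binary judgment
        print(f"{s}: score={score:.3f} -> {'G' if judged_grammatical else 'U'}")
        # Unsupervised online learning: the judged string itself updates the model,
        # so ungrammatical items containing grammatical chunks still reinforce them.
        counts.update(bigrams(s))
    ```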

  18. Statistical power of model selection strategies for genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Zheyang Wu

    2009-07-01

    Genome-wide association studies (GWAS) aim to identify genetic variants related to diseases by examining the associations between phenotypes and hundreds of thousands of genotyped markers. Because many genes are potentially involved in common diseases and a large number of markers are analyzed, it is crucial to devise an effective strategy to identify truly associated variants that have individual and/or interactive effects, while controlling false positives at the desired level. Although a number of model selection methods have been proposed in the literature, including marginal search, exhaustive search, and forward search, their relative performance has only been evaluated through limited simulations due to the lack of an analytical approach to calculating the power of these methods. This article develops a novel statistical approach for power calculation, derives accurate formulas for the power of different model selection strategies, and then uses the formulas to evaluate and compare these strategies in genetic model spaces. In contrast to previous studies, our theoretical framework allows for random genotypes, correlations among test statistics, and a false-positive control based on GWAS practice. After the accuracy of our analytical results is validated through simulations, they are utilized to systematically evaluate and compare the performance of these strategies in a wide class of genetic models. For a specific genetic model, our results clearly reveal how different factors, such as effect size, allele frequency, and interaction, jointly affect the statistical power of each strategy. An example is provided for the application of our approach to empirical research. The statistical approach used in our derivations is general and can be employed to address the model selection problems in other random predictor settings. We have developed an R package markerSearchPower to implement our formulas, which can be downloaded from the
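    As a much-simplified illustration of analytic power calculation for a single strategy (a marginal one-marker test), the sketch below uses the standard chi-square noncentrality argument; the effect size, allele frequency and significance threshold are assumptions, and the paper's framework (correlated statistics, model search) goes well beyond this.

    ```python
    import numpy as np
    from scipy import stats

    n = 10000           # sample size (assumed)
    beta = 0.1          # additive effect per allele, in phenotype SD units (assumed)
    maf = 0.3           # minor allele frequency (assumed)
    alpha = 5e-8        # genome-wide significance threshold

    var_g = 2 * maf * (1 - maf)          # variance of the additive genotype code
    ncp = n * beta ** 2 * var_g          # noncentrality of the 1-df test
    crit = stats.chi2.ppf(1 - alpha, df=1)
    power = stats.ncx2.sf(crit, df=1, nc=ncp)
    print(f"power = {power:.3f}")
    ```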

  19. Decoding β-decay systematics: A global statistical model for β⁻ half-lives

    International Nuclear Information System (INIS)

    Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.

    2009-01-01

    Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β⁻ mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.

  20. Two-dimensional models in statistical mechanics and field theory

    International Nuclear Information System (INIS)

    Koberle, R.

    1980-01-01

    Several features of two-dimensional models in statistical mechanics and field theory, such as lattice quantum chromodynamics, Z(N), Gross-Neveu and CP^(N-1) models, are discussed. The problems of confinement and dynamical mass generation are also analyzed. (L.C.)

  1. Syntactic discriminative language model rerankers for statistical machine translation

    NARCIS (Netherlands)

    Carter, S.; Monz, C.

    2011-01-01

    This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between Statistical

  2. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
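    A toy analogue of the proposed procedure (not the bmem package) is sketched below for a simple mediation model X -> M -> Y: simulate data, test the indirect effect a*b with a percentile bootstrap, and estimate power as the rejection rate. Sample sizes, path values and replication counts are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def one_replication(n=100, a=0.3, b=0.3, n_boot=400, alpha=0.05):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        ab = np.empty(n_boot)
        for k in range(n_boot):
            idx = rng.integers(0, n, n)
            xb, mb, yb = x[idx], m[idx], y[idx]
            a_hat = np.cov(xb, mb)[0, 1] / xb.var(ddof=1)     # a path
            X = np.column_stack([np.ones(n), mb, xb])
            b_hat = np.linalg.lstsq(X, yb, rcond=None)[0][1]  # partial b path
            ab[k] = a_hat * b_hat
        lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return not (lo <= 0.0 <= hi)          # reject if bootstrap CI excludes 0

    power = np.mean([one_replication() for _ in range(200)])
    print("estimated power:", power)
    ```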

  3. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach ... on differences of statistical measures within a section and the same measures between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling ...

  4. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
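    The search strategy can be sketched as simulated annealing over a parameter whose objective is itself a Monte Carlo estimate of a property probability, in the spirit of statistical model checking. The toy birth-death "model" and the target property below are invented; the paper applies the idea to a glucose-insulin model with sequential hypothesis testing.

    ```python
    import math
    import random

    random.seed(6)
    TARGET = 0.8      # desired probability that the property holds

    def simulate_holds(rate):
        """One stochastic run; property: population stays below 20 for 50 steps."""
        x = 5
        for _ in range(50):
            x += 1 if random.random() < rate else -1
            x = max(x, 0)
            if x >= 20:
                return False
        return True

    def estimated_score(rate, n_runs=200):
        """Monte Carlo estimate of the property probability; score = fit to target."""
        p = sum(simulate_holds(rate) for _ in range(n_runs)) / n_runs
        return -abs(p - TARGET)

    rate, score, T = 0.5, -1.0, 1.0
    for step in range(300):
        cand = min(max(rate + random.gauss(0, 0.05), 0.0), 1.0)
        cand_score = estimated_score(cand)
        if cand_score > score or random.random() < math.exp((cand_score - score) / T):
            rate, score = cand, cand_score    # accept the proposed parameter
        T *= 0.99                             # cooling schedule
    print("discovered parameter:", round(rate, 3), "score:", round(score, 3))
    ```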

  5. Dynamics of density fluctuations in a non-Markovian Boltzmann-Langevin model

    International Nuclear Information System (INIS)

    Ayik, S.

    1996-01-01

    In the course of the past few years, the nuclear Boltzmann-Langevin (BL) model has emerged as a promising microscopic model for nuclear dynamics at intermediate energies. The BL model goes beyond the much-employed Boltzmann-Uehling-Uhlenbeck (BUU) model, and hence provides a basis for describing the dynamics of density fluctuations and for addressing processes exhibiting spontaneous symmetry breaking and catastrophic transformations in nuclear collisions, such as induced fission and multifragmentation. In these standard models, the collision term is treated in a Markovian approximation by assuming that two-body collisions are local in both space and time, in accordance with Boltzmann's original treatment. This simplification is usually justified by the fact that the duration of a two-body collision is short on the time scale characteristic of the macroscopic evolution of the system. As a result, the transport properties of the collective motion then have a classical character. However, when the system possesses fast collective modes with characteristic energies that are not small in comparison with the temperature, the quantum-statistical effects are important and the standard Markovian treatment is inadequate. In this case, it is necessary to improve the one-body transport model by including the memory effect due to the finite duration of two-body collisions. First, we briefly describe the non-Markovian extension of the BL model by including the finite memory time associated with two-body collisions. Then, using this non-Markovian model in a linear response framework, we investigate the effect of the memory time on the agitation of unstable modes in nuclear matter in the spinodal zone, and calculate the collisional relaxation rates of nuclear collective vibrations

  6. A statistical mechanics model for free-for-all airplane passenger boarding

    Science.gov (United States)

    Steffen, Jason H.

    2008-12-01

    I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics, where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty.
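    The seat-selection rule described can be sketched directly: each empty seat carries an energy encoding traveler preferences, and an entering passenger chooses among the empty seats with Boltzmann probabilities. The energy landscape and temperature below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    rows, cols, T = 20, 6, 1.0

    # Seat energy: prefer front rows and window/aisle seats (lower energy = better).
    window_aisle_bonus = np.array([0.0, 0.5, 0.25, 0.25, 0.5, 0.0])
    E = 0.1 * np.arange(rows)[:, None] + window_aisle_bonus[None, :]

    occupied = np.zeros((rows, cols), dtype=bool)
    order = []
    for _ in range(rows * cols):
        w = np.where(occupied, 0.0, np.exp(-E / T))      # Boltzmann weights
        p = (w / w.sum()).ravel()
        k = rng.choice(rows * cols, p=p)
        occupied.ravel()[k] = True                       # passenger sits down
        order.append(divmod(k, cols))
    print("first five seats chosen (row, col):", order[:5])
    ```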

  7. A statistical mechanics model for free-for-all airplane passenger boarding

    International Nuclear Information System (INIS)

    Steffen, Jason H.; Fermilab

    2008-01-01

    I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty

  8. A statistical mechanics model for free-for-all airplane passenger boarding

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H.; Fermilab

    2008-08-01

    I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty.

  9. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods.
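    The PCA machinery behind such statistical shape models can be sketched compactly; the landmark sets below are synthetic stand-ins for corresponding points on segmented bone surfaces, and the reconstruction step only illustrates how a new shape is expressed in a few modes of variation.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_subjects, n_points = 9, 500

    # Corresponding 3D points per subject, flattened to shape vectors of length 3p.
    base = rng.normal(size=(n_points, 3))
    shapes = np.stack([(base + 0.05 * rng.normal(size=base.shape)).ravel()
                       for _ in range(n_subjects)])

    mean_shape = shapes.mean(axis=0)
    X = shapes - mean_shape
    # PCA via SVD: rows of Vt are modes of variation, s**2/(n-1) their variances.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (n_subjects - 1)
    print("variance explained by mode 1: %.1f%%" % (100 * var[0] / var.sum()))

    # Reconstruct a new plausible shape from the first k modes of variation.
    k = 3
    coeffs = rng.normal(size=k) * np.sqrt(var[:k])
    new_shape = (mean_shape + coeffs @ Vt[:k]).reshape(n_points, 3)
    ```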

  10. A Tensor Statistical Model for Quantifying Dynamic Functional Connectivity.

    Science.gov (United States)

    Zhu, Yingying; Zhu, Xiaofeng; Kim, Minjeong; Yan, Jin; Wu, Guorong

    2017-06-01

    Functional connectivity (FC) has been widely investigated in many imaging-based neuroscience and clinical studies. Since the functional Magnetic Resonance Imaging (fMRI) signal is only an indirect reflection of brain activity, it is difficult to accurately quantify FC strength based on signal correlation alone. To address this limitation, we propose a learning-based tensor model to derive high-sensitivity and high-specificity connectome biomarkers at the individual level from resting-state fMRI images. First, we propose a learning-based approach to estimate the intrinsic functional connectivity. In addition to the low-level region-to-region signal correlation, latent module-to-module connection is also estimated and used to provide high-level heuristics for measuring connectivity strength. Furthermore, a sparsity constraint is employed to automatically remove spurious connections, thus alleviating the issue of searching for an optimal threshold. Second, we integrate our learning-based approach with the sliding-window technique to further reveal the dynamics of functional connectivity. Specifically, we stack the functional connectivity matrices within each sliding window and form a 3D tensor where the third dimension denotes time. We then obtain dynamic functional connectivity (dFC) for each individual subject by simultaneously estimating the within-sliding-window functional connectivity and characterizing the across-sliding-window temporal dynamics. Third, in order to enhance the robustness of the connectome patterns extracted from dFC, we extend the individual-based 3D tensors to a population-based 4D tensor (with the fourth dimension standing for the training subjects) and learn the statistics of connectome patterns via 4D tensor analysis. Since our 4D tensor model jointly (1) optimizes dFC for each training subject and (2) captures the principal connectome patterns, our statistical model gains more statistical power of representing new subject than current state
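    The construction of the 3D and 4D dFC tensors described above can be sketched in a few lines; the ROI time series are synthetic, and the window length, stride and subject count are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_rois, n_tp = 30, 200
    ts = rng.normal(size=(n_tp, n_rois))        # stand-in for ROI fMRI signals

    win, stride = 40, 5
    windows = range(0, n_tp - win + 1, stride)
    # Sliding-window correlation matrices stacked along a time dimension.
    dfc = np.stack([np.corrcoef(ts[s:s + win].T) for s in windows])
    print("dFC tensor shape (windows, ROI, ROI):", dfc.shape)

    # Population 4D tensor: stack individual 3D tensors over training subjects.
    n_subjects = 10
    group = np.stack([dfc for _ in range(n_subjects)])   # placeholder subjects
    print("4D tensor shape (subjects, windows, ROI, ROI):", group.shape)
    ```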

  11. Bridging Weighted Rules and Graph Random Walks for Statistical Relational Models

    Directory of Open Access Journals (Sweden)

    Seyed Mehran Kazemi

    2018-02-01

    The aim of statistical relational learning is to learn statistical models from relational or graph-structured data. Three main statistical relational learning paradigms include weighted rule learning, random walks on graphs, and tensor factorization. These paradigms have been mostly developed and studied in isolation for many years, with few works attempting to understand the relationships among them or to combine them. In this article, we study the relationship between the path ranking algorithm (PRA), one of the most well-known relational learning methods in the graph random walk paradigm, and relational logistic regression (RLR), one of the recent developments in weighted rule learning. We provide a simple way to normalize relations and prove that relational logistic regression using normalized relations generalizes the path ranking algorithm. This result provides a better understanding of relational learning, especially for the weighted rule learning and graph random walk paradigms. It opens up the possibility of using the more flexible RLR rules within PRA models and even of generalizing both by including normalized and unnormalized relations in the same model.

  12. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets; model performance was nevertheless high for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  13. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
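    A hedged sketch of the 2-point-statistics idea follows: generate a one-dimensional stochastic slip distribution whose autocorrelation matches a target (here exponential, with an assumed correlation length) by Cholesky-factorizing the target covariance, and impose target 1-point statistics through the mean and standard deviation. All parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n, dx, L = 200, 0.5, 10.0               # samples along strike [km], corr. length

    x = np.arange(n) * dx
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / L)   # target 2-point statistics
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))   # jitter for stability

    mean_slip, std_slip = 1.5, 0.5                       # target 1-point statistics [m]
    slip = mean_slip + std_slip * (chol @ rng.normal(size=n))
    slip = np.clip(slip, 0.0, None)                      # slip must be non-negative
    print("mean %.2f m, std %.2f m" % (slip.mean(), slip.std()))
    ```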

  14. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess the railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the main two quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial qualities parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry, for the main Portuguese railway line Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures lead to a better model fit

  15. Hunting Solomonoff's Swans: Exploring the Boundary Between Physics and Statistics in Hydrological Modeling

    Science.gov (United States)

    Nearing, G. S.

    2014-12-01

    Statistical models consistently outperform conceptual models in the short term; however, to account for a nonstationary future (or an unobserved past), scientists prefer to base predictions on unchanging and commutable properties of the universe - i.e., physics. The problem with physically-based hydrology models is, of course, that they aren't really based on physics - they are based on statistical approximations of physical interactions, and we almost uniformly lack an understanding of the entropy associated with these approximations. Thermodynamics is successful precisely because entropy statistics are computable for homogeneous (well-mixed) systems, and ergodic arguments explain the success of Newton's laws in describing systems that are fundamentally quantum in nature. Unfortunately, similar arguments do not hold for systems like watersheds that are heterogeneous at a wide range of scales. Ray Solomonoff formalized the situation in 1968 by showing that, given infinite evidence, simultaneously minimizing model complexity and entropy in predictions always leads to the best possible model. The open question in hydrology is what happens when we don't have infinite evidence - for example, when the future will not look like the past, or when one watershed does not behave like another. How do we isolate stationary and commutable components of watershed behavior? I propose that one possible answer to this dilemma lies in a formal combination of physics and statistics. In this talk I outline my recent analogue (Solomonoff's theorem was digital) of Solomonoff's idea that allows us to quantify the complexity/entropy tradeoff in a way that is intuitive to physical scientists. I show how to formally combine "physical" and statistical methods for model development in a way that allows us to derive the theoretically best possible model given any physics approximation(s) and available observations. Finally, I apply an analogue of Solomonoff's theorem to evaluate the

  16. A statistical model for interpreting computerized dynamic posturography data

    Science.gov (United States)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
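    The latent-variable construction can be sketched as a Tobit-like censored likelihood in which a latent equilibrium score always exists but is recorded as zero when a fall occurs; this is a simplified stand-in for the paper's quasi-maximum-likelihood model, with invented data and parameterization.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(11)
    n, mu_true, sd_true, floor = 400, 70.0, 15.0, 20.0
    latent = rng.normal(mu_true, sd_true, n)
    fell = latent < floor                      # fall if latent ES drops too low
    es = np.where(fell, 0.0, latent)           # observed mixed discrete-continuous ES

    def negloglik(theta):
        mu, log_sd = theta
        sd = np.exp(log_sd)
        ll_fall = stats.norm.logcdf((floor - mu) / sd)    # contribution of falls
        ll_obs = stats.norm.logpdf(es[~fell], mu, sd)     # observed scores
        return -(fell.sum() * ll_fall + ll_obs.sum())

    fit = optimize.minimize(negloglik, x0=[50.0, np.log(10.0)], method="Nelder-Mead")
    print("estimated mu, sd:", fit.x[0], np.exp(fit.x[1]))
    ```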

  17. Using statistical compatibility to derive advanced probabilistic fatigue models

    Czech Academy of Sciences Publication Activity Database

    Fernández-Canteli, A.; Castillo, E.; López-Aenlle, M.; Seitl, Stanislav

    2010-01-01

    Vol. 2, No. 1 (2010), pp. 1131-1140. E-ISSN 1877-7058. [Fatigue 2010. Praha, 06.06.2010-11.06.2010] Institutional research plan: CEZ:AV0Z20410507 Keywords: Fatigue models * Statistical compatibility * Functional equations Subject RIV: JL - Materials Fatigue, Friction Mechanics

  18. Optimizing refiner operation with statistical modelling

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, G [Noranda Research Centre, Pointe Claire, PQ (Canada)

    1997-02-01

    The impact of refining conditions on the energy efficiency of the process and on the handsheet quality of a chemi-mechanical pulp was studied as part of a series of pilot-scale refining trials. Statistical models of refiner performance were constructed from these results and non-linear optimization of process conditions was conducted. Optimization results indicated that increasing the ratio of specific energy applied in the first stage led to a reduction of some 15 per cent in the total energy requirement. The strategy can also be used to obtain significant increases in pulp quality for a given energy input. 20 refs., 6 tabs.

  19. The lz(p)* Person-Fit Statistic in an Unfolding Model Context.

    Science.gov (United States)

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.

  20. Non-Gaussianity and statistical anisotropy from vector field populated inflationary models

    CERN Document Server

    Dimastrogiovanni, Emanuela; Matarrese, Sabino; Riotto, Antonio

    2010-01-01

    We present a review of vector field models of inflation and, in particular, of the statistical anisotropy and non-Gaussianity predictions of models with SU(2) vector multiplets. Non-Abelian gauge groups introduce a richer amount of predictions compared to the Abelian ones, mostly because of the presence of vector fields self-interactions. Primordial vector fields can violate isotropy leaving their imprint in the comoving curvature fluctuations zeta at late times. We provide the analytic expressions of the correlation functions of zeta up to fourth order and an analysis of their amplitudes and shapes. The statistical anisotropy signatures expected in these models are important and, potentially, the anisotropic contributions to the bispectrum and the trispectrum can overcome the isotropic parts.

  1. Computational algebraic geometry for statistical modeling FY09Q2 progress.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David C.; Rojas, Joseph Maurice; Pebay, Philippe Pierre

    2009-03-01

    This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying the chamber cone containing a polynomial system in n variables with n+k terms within polynomial time - a significant improvement over previous algorithms, all having exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.

  2. Development of 3D statistical mandible models for cephalometric measurements

    International Nuclear Information System (INIS)

    Kim, Sung Goo; Yi, Won Jin; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Hong, Helen; Yoo, Ji Hyun

    2012-01-01

    The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a 3D surface model, which was parameterized to establish correspondence between different individual surfaces. The principal component analysis (PCA) applied to all mandible shapes produced a mean model and characteristic models of variation. The cephalometric parameters were measured directly from the mean models to evaluate the 3D shape models. The means of the measured parameters were compared with those from other conventional studies. The male and female 3D statistical mean models were developed from 23 individual mandibles, respectively. The male and female characteristic shapes of variation produced by PCA showed a large variability included in the individual mandibles. The cephalometric measurements from the developed models were very close to those from some conventional studies. We described the construction of 3D mandibular shape models and presented the application of the 3D mandibular template in cephalometric measurements. Optimal reference models determined from variations produced by PCA could be used for craniofacial patients with various types of skeletal shape.

  3. Development of 3D statistical mandible models for cephalometric measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Goo; Yi, Won Jin; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il [School of Dentistry, Seoul National University, Seoul (Korea, Republic of); Hong, Helen; Yoo, Ji Hyun [Division of Multimedia Engineering, Seoul Women's University, Seoul (Korea, Republic of)

    2012-09-15

    The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a 3D surface model, which was parameterized to establish correspondence between different individual surfaces. The principal component analysis (PCA) applied to all mandible shapes produced a mean model and characteristic models of variation. The cephalometric parameters were measured directly from the mean models to evaluate the 3D shape models. The means of the measured parameters were compared with those from other conventional studies. The male and female 3D statistical mean models were developed from 23 individual mandibles, respectively. The male and female characteristic shapes of variation produced by PCA showed a large variability included in the individual mandibles. The cephalometric measurements from the developed models were very close to those from some conventional studies. We described the construction of 3D mandibular shape models and presented the application of the 3D mandibular template in cephalometric measurements. Optimal reference models determined from variations produced by PCA could be used for craniofacial patients with various types of skeletal shape.

  4. A Statistical Model for Synthesis of Detailed Facial Geometry

    OpenAIRE

    Golovinskiy, Aleksey; Matusik, Wojciech; Pfister, Hanspeter; Rusinkiewicz, Szymon; Funkhouser, Thomas

    2006-01-01

    Detailed surface geometry contributes greatly to the visual realism of 3D face models. However, acquiring high-resolution face geometry is often tedious and expensive. Consequently, most face models used in games, virtual reality, or computer vision look unrealistically smooth. In this paper, we introduce a new statistical technique for the analysis and synthesis of small three-dimensional facial features, such as wrinkles and pores. We acquire high-resolution face geometry for people across ...

  5. Some remarks on the statistical model of heavy ion collisions

    International Nuclear Information System (INIS)

    Koch, V.

    2003-01-01

    This contribution is an attempt to assess what can be learned from the remarkable success of the statistical model in describing ratios of particle abundances in ultra-relativistic heavy ion collisions

  6. Modeling and forecasting energy consumption for heterogeneous buildings using a physical–statistical approach

    International Nuclear Information System (INIS)

    Lü, Xiaoshu; Lu, Tao; Kibert, Charles J.; Viljanen, Martti

    2015-01-01

    Highlights: • This paper presents a new modeling method to forecast energy demands. • The model is based on physical–statistical approach to improving forecast accuracy. • A new method is proposed to address the heterogeneity challenge. • Comparison with measurements shows accurate forecasts of the model. • The first physical–statistical/heterogeneous building energy modeling approach is proposed and validated. - Abstract: Energy consumption forecasting is a critical and necessary input to planning and controlling energy usage in the building sector which accounts for 40% of the world’s energy use and the world’s greatest fraction of greenhouse gas emissions. However, due to the diversity and complexity of buildings as well as the random nature of weather conditions, energy consumption and loads are stochastic and difficult to predict. This paper presents a new methodology for energy demand forecasting that addresses the heterogeneity challenges in energy modeling of buildings. The new method is based on a physical–statistical approach designed to account for building heterogeneity to improve forecast accuracy. The physical model provides a theoretical input to characterize the underlying physical mechanism of energy flows. Then stochastic parameters are introduced into the physical model and the statistical time series model is formulated to reflect model uncertainties and individual heterogeneity in buildings. A new method of model generalization based on a convex hull technique is further derived to parameterize the individual-level model parameters for consistent model coefficients while maintaining satisfactory modeling accuracy for heterogeneous buildings. The proposed method and its validation are presented in detail for four different sports buildings with field measurements. The results show that the proposed methodology and model can provide a considerable improvement in forecasting accuracy

  7. Rényi statistics for testing composite hypotheses in general exponential models

    Czech Academy of Sciences Publication Activity Database

    Morales, D.; Pardo, L.; Pardo, M. C.; Vajda, Igor

    2004-01-01

    Vol. 38, No. 2 (2004), pp. 133-147. ISSN 0233-1888 R&D Projects: GA ČR GA201/02/1391 Grant - others: BMF(ES) 2003-00892; BMF(ES) 2003-04820 Institutional research plan: CEZ:AV0Z1075907 Keywords: natural exponential models * Levy processes * generalized Wald statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.323, year: 2004

  8. Statistical Property and Model for the Inter-Event Time of Terrorism Attacks

    Science.gov (United States)

    Zhu, Jun-Fang; Han, Xiao-Pu; Wang, Bing-Hong

    2010-06-01

    The inter-event time of terrorism attack events is investigated using empirical data and model analysis. Empirical evidence shows that it follows a scale-free property. In order to understand the dynamic mechanism of such a statistical feature, an opinion dynamics model with a memory effect is proposed on a two-dimensional lattice network. The model mainly highlights the role of individual social conformity and self-affirmation psychology. An attack event occurs when the order parameter indicating the strength of public opposition opinion is smaller than a critical value. Ultimately, the model can reproduce the same statistical property as the empirical data and gives a good understanding of the possible dynamic mechanism of terrorism attacks.
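    The scale-free claim can be checked with the standard continuous maximum-likelihood estimate of a power-law exponent, alpha_hat = 1 + n / sum(log(t_i / t_min)); the sketch below applies it to synthetic Pareto-distributed inter-event times rather than the empirical terrorism data.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    alpha_true, t_min, n = 2.4, 1.0, 5000
    # Inverse-CDF sampling of a power-law (Pareto) distribution with exponent alpha.
    t = t_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))

    alpha_hat = 1 + n / np.log(t / t_min).sum()
    print("estimated exponent:", round(alpha_hat, 3))   # should be near 2.4
    ```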

  9. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step in advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  10. A combined statistical model for multiple motifs search

    International Nuclear Information System (INIS)

    Gao Lifeng; Liu Xin; Guan Shan

    2008-01-01

    Transcription factor binding sites (TFBS) play key roles in gene expression and regulation. They are short sequence segments with a definite structure and can be recognized correctly by the corresponding transcription factors. From the viewpoint of statistics, candidate TFBS should be quite different from segments that are randomly assembled from nucleotides. This paper proposes a combined statistical model for finding over-represented short sequence segments in different kinds of data sets. While the over-represented short sequence segment is described by a position weight matrix, the nucleotide distribution at most sites of the segment should be far from the background nucleotide distribution. The central idea of this approach is to search for such signals. The algorithm is tested on three data sets: the binding sites of the cyclic AMP receptor protein in E. coli; PlantProm DB, a non-redundant collection of proximal promoter sequences from different species; and the collection of intergenic sequences of the whole E. coli genome. Even though the complexity of these three data sets is quite different, the results show that this model is rather general and sensible. (general)
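    The position-weight-matrix ingredient of such models can be sketched as below: build a PWM with pseudocounts from aligned sites and score candidate segments by log-odds against a background distribution, so that over-represented segments stand out. The sites, background and sequence are invented examples.

    ```python
    import numpy as np

    sites = ["TGTGA", "TGAGA", "TGTGC", "TTTGA"]      # aligned example binding sites
    alphabet = "ACGT"
    L = len(sites[0])

    # Position weight matrix with pseudocounts.
    counts = np.ones((L, 4))
    for s in sites:
        for i, ch in enumerate(s):
            counts[i, alphabet.index(ch)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    background = np.full(4, 0.25)
    pwm = np.log2(freqs / background)                 # log-odds matrix

    def score(segment):
        return sum(pwm[i, alphabet.index(ch)] for i, ch in enumerate(segment))

    seq = "ACGTTGTGACCGATTGAGATT"
    best = max((score(seq[i:i + L]), i) for i in range(len(seq) - L + 1))
    print("best score %.2f at position %d" % best)
    ```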

  11. Statistical-mechanical lattice models for protein-DNA binding in chromatin

    International Nuclear Information System (INIS)

    Teif, Vladimir B; Rippe, Karsten

    2010-01-01

    Statistical-mechanical lattice models for protein-DNA binding are well established as a method to describe complex ligand binding equilibria measured in vitro with purified DNA and protein components. Recently, a new field of applications has opened up for this approach since it has become possible to experimentally quantify genome-wide protein occupancies in relation to the DNA sequence. In particular, the organization of the eukaryotic genome by histone proteins into a nucleoprotein complex termed chromatin has been recognized as a key parameter that controls the access of transcription factors to the DNA sequence. New approaches have to be developed to derive statistical-mechanical lattice descriptions of chromatin-associated protein-DNA interactions. Here, we present the theoretical framework for lattice models of histone-DNA interactions in chromatin and investigate the (competitive) DNA binding of other chromosomal proteins and transcription factors. The results have a number of applications for quantitative models for the regulation of gene expression.

  12. Computer modelling of statistical properties of SASE FEL radiation

    International Nuclear Information System (INIS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1997-01-01

    The paper describes an approach to computer modelling of statistical properties of the radiation from self amplified spontaneous emission free electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, distribution of the fluctuations of the instantaneous radiation power, distribution of the energy in the electron bunch, distribution of the radiation energy after monochromator installed at the FEL amplifier exit and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility being under construction at DESY

  13. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A Parametric Image is an image in which each pixel value is a function of the values of the same pixel in an image sequence. The Local Model Method fits each pixel's time-activity curve with a model whose parameter values form the Parametric Images. The Global Model Method models the changes between two images; it is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more Parametric Images is performed using 1D or 2D histograms. Statistically significant Parametric Images (images of significant Variances, Amplitudes and Differences) are also proposed [fr

  14. Experimental investigation of statistical models describing distribution of counts

    International Nuclear Information System (INIS)

    Salma, I.; Zemplen-Papp, E.

    1992-01-01

    The binomial, Poisson and modified Poisson models which are used for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are considered. The validity of the Poisson and the modified Poisson statistical distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyser (4096 channels) in the multiscaling mode. According to the results, Poisson statistics describe the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, analysis of the data demonstrated, with confidence, that for long measurements (T ≥ T1/2) the Poisson distribution is not valid and the modified Poisson function is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed. Differences between the standard deviations evaluated on the basis of the Poisson and binomial models are especially significant for experiments with long measuring times (T/T1/2 ≥ 2) and/or large detection efficiency (ε > 0.30). Optimization of the measuring time for paired observations yields the same solution for either the binomial or the Poisson distribution. (orig.)
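
    A small simulation makes the binomial-versus-Poisson point of this record concrete: when the source decays appreciably during the measurement, counts are binomial with success probability p = ε(1 - exp(-λT)) per atom, and the Poisson standard deviation sqrt(mean) overestimates the scatter. The source size and detection efficiency below are illustrative, not the experimental values.

```python
# Compare binomial and Poisson descriptions of counts from a short-lived
# source, as in the experiment with 89mY (T1/2 = 16.06 s).
import numpy as np

rng = np.random.default_rng(1)
half_life = 16.06                        # s
lam = np.log(2) / half_life
N0, eff = 100_000, 0.35                  # assumed atoms and efficiency

for T in (0.5 * half_life, 2 * half_life):
    p = eff * (1 - np.exp(-lam * T))     # prob. that one atom yields a count
    counts = rng.binomial(N0, p, size=20_000)
    mean = counts.mean()
    # Poisson predicts std = sqrt(mean); binomial predicts sqrt(mean*(1-p)).
    print(f"T/T1/2={T/half_life:.1f}  mean={mean:.0f}  "
          f"observed std={counts.std():.1f}  Poisson={np.sqrt(mean):.1f}  "
          f"binomial={np.sqrt(mean*(1-p)):.1f}")
```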

  15. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*

    KAUST Repository

    Castruccio, Stefano

    2014-03-01

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.

  16. Determination of daily solar ultraviolet radiation using statistical models and artificial neural networks

    Directory of Open Access Journals (Sweden)

    F. J. Barbero

    2006-09-01

    Full Text Available In this study, two different methodologies are used to develop two models for estimating daily solar UV radiation. The first is based on traditional statistical techniques whereas the second is based on artificial neural network methods. Both models use daily solar global broadband radiation as the only measured input. The statistical model is derived from a relationship between the daily UV and global clearness indices, modulated by the relative optical air mass. The inputs to the neural network model were determined from a large number of radiometric and atmospheric parameters using the automatic relevance determination method, although only the daily solar global irradiation, daily global clearness index and relative optical air mass were shown to be the optimal input variables. Both the statistical and neural network models were developed using data measured at Almería (Spain), a semiarid and coastal climate, and tested against data from Table Mountain (Golden, CO, USA), a mountainous and dry environment. Results show that the statistical model performs adequately at both sites for all weather conditions, especially when only snow-free days at Golden were considered (RMSE=4.6%, MBE= –0.1%). The neural network model provides the best overall estimates at the site where it was trained, but performs inadequately at the Golden site when snow-covered days are included (RMSE=6.5%, MBE= –3.0%). This result confirms that the neural network model does not respond adequately over ranges of the input parameters that were not used in its development.

  17. A statistical skull geometry model for children 0-3 years old.

    Directory of Open Access Journals (Sweden)

    Zhigang Li

    Full Text Available Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0-3 YO population. In this study, head CT scans from fifty-six 0-3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models.

  18. A statistical skull geometry model for children 0-3 years old.

    Science.gov (United States)

    Li, Zhigang; Park, Byoung-Keon; Liu, Weiguo; Zhang, Jinhuan; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2015-01-01

    Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0-3 YO population. In this study, head CT scans from fifty-six 0-3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models.

  19. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
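
    The flavor of the approach can be sketched as a Bayesian acceptance test on a stochastic black box, wrapped in a naive random search over one parameter. The toy "model", the specification threshold theta, and the acceptance cutoff c below are hypothetical stand-ins for the agent-based inflammation model and the formal-logic specifications of the paper.

```python
# Bayesian statistical model checking sketch: accept a parameter if the
# posterior probability that the spec-satisfaction rate p exceeds theta
# is itself above a cutoff c, under a Beta(1,1) prior.
import random
from scipy.stats import beta

def model_satisfies_spec(dose):
    # toy stochastic model: the specification holds more often at high dose
    return random.gauss(dose, 0.8) > 1.5

def bayes_check(param, theta=0.9, c=0.99, max_samples=5000):
    succ = 0
    for n in range(1, max_samples + 1):
        succ += model_satisfies_spec(param)
        post = beta.sf(theta, 1 + succ, 1 + n - succ)   # P(p >= theta | data)
        if post > c:
            return True
        if post < 1 - c:
            return False
    return False

random.seed(0)
for _ in range(100):                     # stochastic search over the parameter
    dose = random.uniform(0.0, 5.0)
    if bayes_check(dose):
        print("dose satisfying the specification with high probability:",
              round(dose, 2))
        break
```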

  20. A Statistical Graphical Model of the California Reservoir System

    Science.gov (United States)

    Taeb, A.; Reager, J. T.; Turmon, M.; Chandrasekaran, V.

    2017-11-01

    The recent California drought has highlighted the potential vulnerability of the state's water management infrastructure to multiyear dry intervals. Due to the high complexity of the network, dynamic storage changes in California reservoirs on a state-wide scale have previously been difficult to model using either traditional statistical or physical approaches. Indeed, although there is a significant line of research on exploring models for single (or a small number of) reservoirs, these approaches are not amenable to a system-wide modeling of the California reservoir network due to the spatial and hydrological heterogeneities of the system. In this work, we develop a state-wide statistical graphical model to characterize the dependencies among a collection of 55 major California reservoirs across the state; this model is defined with respect to a graph in which the nodes index reservoirs and the edges specify the relationships or dependencies between reservoirs. We obtain and validate this model in a data-driven manner based on reservoir volumes over the period 2003-2016. A key feature of our framework is a quantification of the effects of external phenomena that influence the entire reservoir network. We further characterize the degree to which physical factors (e.g., state-wide Palmer Drought Severity Index (PDSI), average temperature, snow pack) and economic factors (e.g., consumer price index, number of agricultural workers) explain these external influences. As a consequence of this analysis, we obtain a system-wide health diagnosis of the reservoir network as a function of PDSI.
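
    The dependency-graph idea can be sketched with a sparse inverse covariance estimate: nonzero off-diagonal entries of the estimated precision matrix define the edges of the graph. The synthetic volume data and the regularization strength below are assumptions; the paper's model additionally handles latent state-wide factors explicitly.

```python
# Estimate a sparse dependency graph among reservoir volumes via the
# graphical lasso; synthetic data stand in for the 55-reservoir records.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_months, n_res = 168, 10                  # 14 years of monthly data (toy size)
common = rng.normal(size=(n_months, 1))    # state-wide driver (e.g., drought)
volumes = 0.8 * common + 0.5 * rng.normal(size=(n_months, n_res))

model = GraphicalLasso(alpha=0.05).fit(volumes)
precision = model.precision_

# nonzero off-diagonal precision entries are the edges of the graph
edges = [(i, j) for i in range(n_res) for j in range(i + 1, n_res)
         if abs(precision[i, j]) > 1e-3]
print(f"{len(edges)} edges recovered among {n_res} reservoirs")
```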

  1. Statistical Language Models and Information Retrieval: Natural Language Processing Really Meets Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.

    2001-01-01

    Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.
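
    A minimal sketch of the language-modelling approach to retrieval: rank documents by the smoothed probability that their language model generates the query. The toy corpus and the Jelinek-Mercer mixture weight are illustrative assumptions, not the specific model of the article.

```python
# Query likelihood retrieval: P(query | doc) with linear interpolation
# between the document model and the collection model to avoid zeros.
from collections import Counter

docs = {"d1": "statistical language models for retrieval".split(),
        "d2": "natural language processing of formal models".split()}
coll = Counter(w for d in docs.values() for w in d)
coll_len = sum(coll.values())

def query_likelihood(query, doc, lam=0.5):   # lam is an assumed mixture weight
    tf, dlen, score = Counter(doc), len(doc), 1.0
    for w in query.split():
        p_doc = tf[w] / dlen
        p_coll = coll[w] / coll_len
        score *= lam * p_doc + (1 - lam) * p_coll
    return score

for name, doc in docs.items():
    print(name, query_likelihood("statistical retrieval", doc))
```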

  2. ARSENIC CONTAMINATION IN GROUNDWATER: A STATISTICAL MODELING

    Directory of Open Access Journals (Sweden)

    Palas Roy

    2013-01-01

    Full Text Available High arsenic in natural groundwater in most of the tubewells of the Purbasthali-Block II area of Burdwan district (W.B., India) has recently come into focus as a serious environmental concern. This paper illustrates the statistical modeling of arsenic-contaminated groundwater to identify the interrelation of arsenic content with other participating groundwater parameters, so that the arsenic contamination level can easily be predicted by analyzing only those parameters. Multivariate data analysis of groundwater samples collected from 132 tubewells of this contaminated region shows that three parameters are significantly related to arsenic. Based on these relationships, a multiple linear regression model has been developed that estimates the arsenic contamination from measurements of these three predictor parameters in the contaminated aquifer. This model could also be a suggestive tool for designing arsenic removal schemes for any affected groundwater.
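
    A minimal sketch of the regression step, using ordinary least squares with three predictors. The synthetic data and the predictor names are hypothetical stand-ins for the measured groundwater variables.

```python
# Multiple linear regression: arsenic concentration from three covariates.
import numpy as np

rng = np.random.default_rng(42)
n = 132                                          # matches the tubewell count
X = rng.normal(size=(n, 3))                      # e.g., Fe, PO4, depth (assumed)
true_beta = np.array([0.9, 0.4, -0.3])
arsenic = 1.5 + X @ true_beta + 0.2 * rng.normal(size=n)

A = np.column_stack([np.ones(n), X])             # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, arsenic, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((arsenic - pred)**2) / np.sum((arsenic - arsenic.mean())**2)
print("coefficients:", beta.round(2), " R^2:", round(r2, 3))
```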

  3. Schedulability of Herschel revisited using statistical model checking

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel

    2015-01-01

    Schedulability analysis based on classical response-time methods relies on conservative assumptions on the execution and blocking times of tasks. Consequently, the method may falsely declare deadline violations that will never occur during execution. This paper is a continuation of previous work of the authors in applying extended timed automata model checking (using the tool UPPAAL) to obtain more exact schedulability analysis, here in the presence of non-deterministic computation times of tasks given by intervals [BCET,WCET]. Computation intervals with preemptive schedulers make the schedulability analysis of the resulting task model undecidable. Our contribution is to propose a combination of model checking techniques with an over-approximation technique. We can safely conclude that the system is schedulable for varying values of BCET. For the cases where deadlines are violated, we use polyhedra to try to confirm the witnesses. Our alternative method to confirm non-schedulability uses statistical model-checking (SMC) to generate counter-examples.

  4. Formation and decay of hot nuclei in 40 Ca + 40 Ca at 35 MeV/nucleon

    International Nuclear Information System (INIS)

    Planeta, R.; Gawlikowicz, W.; Grotowski, K.

    2000-01-01

    Properties of multifragmentation of 'hot sources' produced in the 40 Ca + 40 Ca reaction have been studied at a beam energy of 35 MeV/nucleon. Two signatures of prompt multifragmentation which make use of special features of particle emission from the 'freeze out volume' together with an analysis of the reduced relative velocity between pairs of intermediate mass fragments indicate the presence of a transition from the sequential decay to prompt multifragmentation at an excitation energy of about 3 MeV/nucleon. (authors)

  5. Formation and Decay of Hot Nuclei in Heavy Ion Collisions

    International Nuclear Information System (INIS)

    Planeta, R.; Gawlikowicz, W.; Grotowski, K.

    2000-01-01

    The properties of the multifragmentation of ''hot sources'' produced in the 40 Ca+ 40 Ca reaction have been studied at a beam energy of 35 MeV/nucleon. Two signatures of prompt multifragmentation, which make use of special features of particle emission from the ''freeze out volume'', together with an analysis of the reduced relative velocity between pairs of intermediate mass fragments, indicate the presence of a transition from sequential decay to prompt multifragmentation at an excitation energy of about 3 MeV/nucleon. (author)

  6. Understanding advanced statistical methods

    CERN Document Server

    Westfall, Peter

    2013-01-01

    Contents: Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus - Derivatives and Least Squares; More Calculus - Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...

  7. Statistical pairwise interaction model of stock market

    Science.gov (United States)

    Bury, Thomas

    2013-03-01

    Financial markets are a classical example of complex systems as they are compound by many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results but at the price of restrictive assumptions on the market dynamics or they are agent-based models with rules designed in order to recover some empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stocks orientation without making such restrictive assumptions. This is done with an approach only based on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications since many properties of such a model are already known and some techniques of the spin glass theory can be straightforwardly applied. Typical behaviors, as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, order-disorder, could find an explanation in this picture.
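
    The pairwise fit can be sketched with the naive mean-field inversion, which recovers Ising couplings from the empirical correlation matrix of binarized returns. The synthetic return data and the choice of inversion formula are assumptions; the paper's fitting procedure may differ in detail.

```python
# Pairwise maximum-entropy (Ising) fit via naive mean-field inversion:
# couplings J ~ -(C^-1) off-diagonal, fields from the magnetizations.
import numpy as np

rng = np.random.default_rng(7)
T, N = 2500, 8                                   # days, stocks (toy size)
cov = 0.3 * np.ones((N, N)) + 0.7 * np.eye(N)    # correlated synthetic returns
returns = rng.multivariate_normal(np.zeros(N), cov, T)
s = np.sign(returns)                             # binary orientations +/-1

m = s.mean(axis=0)                               # first moments
C = np.cov(s.T)                                  # second moments (connected)
J = -np.linalg.inv(C)                            # mean-field couplings
np.fill_diagonal(J, 0.0)
h = np.arctanh(m) - J @ m                        # mean-field external fields

print("mean coupling:", J[np.triu_indices(N, 1)].mean().round(3))
print("mean field:", h.mean().round(3))
```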

  8. GIGMF - A statistical model program

    International Nuclear Information System (INIS)

    Vladuca, G.; Deberth, C.

    1978-01-01

    The program GIGMF computes the differential and integrated statistical model cross sections for reactions proceeding through a compound nuclear stage. The computational method is based on the Hauser-Feshbach-Wolfenstein theory, modified to include the modern version of Tepel et al. Although the program was written for a PDP-15 computer with 16K high-speed memory, many reaction channels can be taken into account, with the following restrictions: the projectile spin must be less than 2, and the maximum spin momentum of the compound nucleus cannot be greater than 10. These restrictions are due solely to the storage allotments and may easily be relaxed. The energy of the impinging particle, the target and projectile masses, the spins and parities of the projectile, target, emergent and residual nuclei, the maximum orbital momentum and the transmission coefficients for each reaction channel are the input parameters of the program. (author)

  9. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
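
    A minimal sketch of linear propagation of variance: a numerical Jacobian of a device characteristic pushes an assumed parameter covariance to an output variance. The toy drain-current expression and the parameter statistics are illustrative; they are not the EKV-EPFL model.

```python
# First-order (linear) propagation of variance through a toy device model.
import numpy as np

def drain_current(p):                 # toy characteristic I(Vth, beta)
    vth, beta = p
    vgs = 1.0
    return 0.5 * beta * (vgs - vth)**2

p0 = np.array([0.4, 2e-3])            # nominal parameters (assumed)
cov_p = np.diag([0.02**2, (5e-5)**2]) # assumed parameter covariance

eps = 1e-6                            # numerical Jacobian at the nominal point
J = np.array([(drain_current(p0 + eps * np.eye(2)[i]) - drain_current(p0)) / eps
              for i in range(2)])

var_I = J @ cov_p @ J.T               # first-order variance of the output
print("sigma(I) =", np.sqrt(var_I))
```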

  10. Three-Dimensional Assembly Tolerance Analysis Based on the Jacobian-Torsor Statistical Model

    Directory of Open Access Journals (Sweden)

    Peng Heping

    2017-01-01

    Full Text Available The unified Jacobian-Torsor model has been developed for deterministic (worst case) tolerance analysis. This paper presents a comprehensive model for performing statistical tolerance analysis by integrating the unified Jacobian-Torsor model and Monte Carlo simulation. In this model, an assembly is sub-divided into surfaces, and the Small Displacements Torsor (SDT) parameters are used to express the relative position between any two surfaces of the assembly. Then, a 3D dimension-chain is created by using a surface graph of the assembly, and the unified Jacobian-Torsor model is developed based on the effect of each functional element on the whole functional requirements of the product. Finally, Monte Carlo simulation is implemented for the statistical tolerance analysis. A numerical example is given to demonstrate the capability of the proposed method in handling three-dimensional assembly tolerance analysis.
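
    The Monte Carlo step can be sketched by sampling toleranced dimensions and propagating them through a functional requirement. The linear gap function below stands in for the full Jacobian-Torsor chain, and the tolerances and specification limits are assumed.

```python
# Monte Carlo tolerance stack-up: sample dimensions, evaluate the
# functional requirement, and estimate the assembly yield.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# dimensions ~ normal, with tolerance interpreted as +/-3 sigma (assumption)
d1 = rng.normal(20.0, 0.05 / 3, n)
d2 = rng.normal(10.0, 0.03 / 3, n)
d3 = rng.normal(30.2, 0.06 / 3, n)

gap = d3 - (d1 + d2)                  # functional requirement of the assembly
lo, hi = 0.05, 0.35                   # specification limits (illustrative)
yield_ = np.mean((gap > lo) & (gap < hi))
print(f"mean gap = {gap.mean():.3f}, predicted assembly yield = {yield_:.4f}")
```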

  11. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Manuel Perez Malumbres

    2013-02-01

    Full Text Available In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the received signal strength to deviate from the nominal value predicted by a deterministic propagation model. To facilitate large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance increase and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.).
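
    A minimal sketch of such a statistical path-loss model: a deterministic log-distance mean plus Gaussian scatter in dB (i.e., log-normal transmission loss). The spreading factor, the truncated Thorp-like absorption term, and the scatter sigma are illustrative placeholders for the values one would infer from the ray-tracing ensemble.

```python
# Log-distance mean transmission loss with log-normal shadowing.
import numpy as np

rng = np.random.default_rng(11)

def transmission_loss_dB(d, d0=1.0, k=1.5, f_kHz=20.0, sigma=3.0, n=1):
    # first term of a Thorp-like absorption formula, dB/km (simplified)
    absorption = 0.11 * f_kHz**2 / (1 + f_kHz**2)
    mean_tl = 10 * k * np.log10(d / d0) + absorption * d / 1000.0
    return mean_tl + rng.normal(0.0, sigma, size=n)   # Gaussian-in-dB scatter

samples = transmission_loss_dB(d=2000.0, n=10_000)
print(f"TL at 2 km: mean {samples.mean():.1f} dB, std {samples.std():.1f} dB")
```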

  12. Analytical model of SiPM time resolution and order statistics with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2015-01-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented

  13. Analytical model of SiPM time resolution and order statistics with crosstalk

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, S., E-mail: Sergey.Vinogradov@liverpool.ac.uk [University of Liverpool and Cockcroft Institute, Sci-Tech Daresbury, Keckwick Lane, Warrington WA4 4AD (United Kingdom); P.N. Lebedev Physical Institute of the Russian Academy of Sciences, 119991 Leninskiy Prospekt 53, Moscow (Russian Federation)

    2015-07-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented.
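
    The order-statistics effect at the heart of both records can be illustrated by the timing of the first of N detected photons: the spread of the first order statistic shrinks roughly as 1/N for an exponential decay. The LYSO-like decay constant and Gaussian jitter below are common assumptions, and crosstalk is deliberately omitted from this sketch.

```python
# First order statistic of photon detection times: exponential emission
# smeared by Gaussian single-photon jitter.
import numpy as np

rng = np.random.default_rng(5)
tau, jitter, n_events = 40.0, 0.12, 20_000     # ns; assumed LYSO-like values

for n_photons in (1, 10, 100):
    t = (rng.exponential(tau, (n_events, n_photons))
         + rng.normal(0.0, jitter, (n_events, n_photons)))
    first = t.min(axis=1)                      # first order statistic
    print(f"N={n_photons:4d}  std of first-photon time = {first.std():.3f} ns")
```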

  14. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.

  15. Numerical and Qualitative Contrasts of Two Statistical Models for Water Quality Change in Tidal Waters

    Science.gov (United States)

    Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and...

  16. Nonlinear Fluctuation Behavior of Financial Time Series Model by Statistical Physics System

    Directory of Open Access Journals (Sweden)

    Wuyang Cheng

    2014-01-01

    Full Text Available We develop a random financial time series model of the stock market using one of the statistical physics systems, the stochastic contact interacting system. The contact process is a continuous-time Markov process; one interpretation of this model is as a model for the spread of an infection, where the epidemic spreading mimics the interplay of local infections and recovery of individuals. From this financial model, we study the statistical behaviors of return time series, and the corresponding behaviors of returns for the Shanghai Stock Exchange Composite Index (SSECI) and the Hang Seng Index (HSI) are also comparatively studied. Further, we investigate the Zipf distribution and multifractal phenomenon of returns and price changes. Zipf analysis and MF-DFA analysis are applied to investigate the nature of fluctuations in the stock market.

  17. The mass (charge) spectrum of superheavy nuclei fission fragments: the new perspectives for the theory of nucleosynthesis

    International Nuclear Information System (INIS)

    Maslyuk, V.T.

    2012-01-01

    A new approach to the problem of nucleosynthesis is proposed, based on the assumption that nuclear matter, or a series of superheavy nuclei, fragments down to ordinary atomic nuclei. It is shown that the mass (charge) fragment yields (MCFY) after the disintegration of nuclear matter can be studied within the proposed statistical theory. Calculated MCFY data are presented for the multifragmentation of exotic superheavy nuclei with A = 300, 900 and 1200 and arbitrary Z values.

  18. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, we usually need to calculate corresponding points, and the constructed statistical model will differ depending on the method used to calculate them. This article examines how the method used to calculate corresponding points affects statistical models of human organs. We validated the performance of each statistical model by registering it to an organ surface in a 3D medical image. We compare two methods for calculating corresponding points. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the Entropy-based Particle system, chooses corresponding points by treating a number of curved surfaces statistically. With these methods we construct the statistical models and use them for registration with the medical image. For the estimation, we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods for calculating corresponding points affect the statistical model through the change in the probability density at each point. (author)

  19. Statistical properties of three-dimensional two-fluid plasma model

    Energy Technology Data Exchange (ETDEWEB)

    Qaisrani, M. Hasnain [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, WuHan, Hubei 430074 (China); Xia, ZhenWei [Department of Modern Physics, University of Science and Technology of China, Hefei 230026 (China); Zou, Dandan, E-mail: ddzou@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, WuHan, Hubei 430074 (China); School of Physics and Optoelectronic Engineering, Yangtze University, Jingzhou 434023 (China)

    2015-09-15

    The nonlinear dynamics of the incompressible non-dissipative two-fluid plasma model is investigated through classical Gibbs ensemble methods. Liouville's theorem of phase space for each wave number is proved, and the absolute equilibrium spectra for the Galerkin-truncated two-fluid model are calculated. In two-fluid theory, the equilibrium is built on the conservation of three quadratic invariants: the total energy and the self-helicities of the ion and electron fluids, respectively. The implications of statistical equilibrium spectra with arbitrary ratios of conserved invariants are discussed.

  20. Penultimate modeling of spatial extremes: statistical inference for max-infinitely divisible processes

    KAUST Repository

    Huser, Raphaë l; Opitz, Thomas; Thibaud, Emeric

    2018-01-01

    Extreme-value theory for stochastic processes has motivated the statistical use of max-stable models for spatial extremes. However, fitting such asymptotic models to maxima observed over finite blocks is problematic when the asymptotic stability

  1. Hadronic equation of state in the statistical bootstrap model and linear graph theory

    International Nuclear Information System (INIS)

    Fre, P.; Page, R.

    1976-01-01

    Taking a statistical mechanical point of view, the statistical bootstrap model is discussed and, from a critical analysis of the bootstrap volume concept, a physical hypothesis is reached which leads immediately to the hadronic equation of state provided by the bootstrap integral equation. In this context the connection between the statistical bootstrap and the linear graph theory approach to interacting gases is also analyzed.

  2. Addressing economic development goals through innovative teaching of university statistics: a case study of statistical modelling in Nigeria

    Science.gov (United States)

    Oseloka Ezepue, Patrick; Ojo, Adegbola

    2012-12-01

    A challenging problem in some developing countries such as Nigeria is inadequate training of students in effective problem solving using the core concepts of their disciplines. Related to this is a disconnection between their learning and socio-economic development agenda of a country. These problems are more vivid in statistical education which is dominated by textbook examples and unbalanced assessment 'for' and 'of' learning within traditional curricula. The problems impede the achievement of socio-economic development objectives such as those stated in the Nigerian Vision 2020 blueprint and United Nations Millennium Development Goals. They also impoverish the ability of (statistics) graduates to creatively use their knowledge in relevant business and industry sectors, thereby exacerbating mass graduate unemployment in Nigeria and similar developing countries. This article uses a case study in statistical modelling to discuss the nature of innovations in statistics education vital to producing new kinds of graduates who can link their learning to national economic development goals, create wealth and alleviate poverty through (self) employment. Wider implications of the innovations for repositioning mathematical sciences education globally are explored in this article.

  3. Huffman and linear scanning methods with statistical language models.

    Science.gov (United States)

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
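
    Huffman scanning rests on ordinary Huffman coding over letter probabilities supplied by the language model, so that likely letters require few yes/no switch activations. Below is a sketch with rough English letter frequencies; the probabilities and the catch-all symbol are assumptions.

```python
# Build a Huffman code over letter probabilities: frequent letters get
# short yes/no scan sequences, mirroring the Huffman-scanning idea.
import heapq
from itertools import count

probs = {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "i": 0.070,
         "n": 0.067, "s": 0.063, "_": 0.425}   # '_' lumps the rest (assumed)

tiebreak = count()                              # avoids comparing tree nodes
heap = [(p, next(tiebreak), ch) for ch, p in probs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    p1, _, left = heapq.heappop(heap)
    p2, _, right = heapq.heappop(heap)
    heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))

codes = {}
def assign(node, prefix=""):
    if isinstance(node, str):
        codes[node] = prefix or "0"
    else:
        assign(node[0], prefix + "0")           # 0 = "no", 1 = "yes" switch
        assign(node[1], prefix + "1")

assign(heap[0][2])
print(codes)
```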

  4. A statistical model for radar images of agricultural scenes

    Science.gov (United States)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.

  5. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  6. Statistical Multipath Model Based on Experimental GNSS Data in Static Urban Canyon Environment

    Directory of Open Access Journals (Sweden)

    Yuze Wang

    2018-04-01

    Full Text Available A deep understanding of multipath characteristics is essential to the design of signal simulators and receivers for global navigation satellite system applications. As new constellations are deployed and more applications appear in the urban environment, the statistical multipath models of the navigation signal need further study. In this paper, we present statistical distribution models of multipath time delay, multipath power attenuation, and multipath fading frequency based on experimental data collected in an urban canyon environment. The raw multipath characteristics are obtained by processing real navigation signals to study their statistical distributions. Fitting the data shows that the probability distribution of time delay follows a gamma distribution, which is related to the waiting time of Poisson-distributed events. The fading frequency follows an exponential distribution, and the mean multipath power attenuation decreases linearly with increasing time delay. In addition, the detailed statistical characteristics for satellites at different elevations and in different orbits are studied; the parameters of each distribution are quite different. The research results give useful guidance to designers of navigation simulators and receivers.
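
    The distribution-fitting step reported here can be sketched with SciPy's maximum-likelihood gamma fit; the synthetic delays and the goodness-of-fit check stand in for the measured urban-canyon data.

```python
# Fit a gamma distribution to multipath excess delays and check the fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
delays = rng.gamma(shape=2.0, scale=120.0, size=5000)   # ns, synthetic

shape, loc, scale = stats.gamma.fit(delays, floc=0.0)   # fix location at zero
ks = stats.kstest(delays, "gamma", args=(shape, loc, scale))
print(f"shape={shape:.2f}, scale={scale:.1f} ns, KS p-value={ks.pvalue:.3f}")
```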

  7. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi

    2017-08-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach exhibits satisfying results in terms of speckle noise reduction as well as edge preservation, at reduced computational cost.

  8. Statistical model for expected unsupplied energy; Statistisk modell for forventet ILE

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    Results from a statistical analysis of expected unsupplied energy for Norwegian network companies are presented. The data are from the years 1996-2004. The estimation model includes several explanatory variables that together reflect the characteristics of the network, climatic aspects and other geographical conditions. The model has a high degree of accuracy when compared to the historical amount of unsupplied energy for about 90 percent of the network companies. But for 12 companies there are substantial negative deviations that are not compatible with the available data. There is reason to believe that improved data for some types of variables can improve the accuracy of the model. In addition to establishing a norm for expected unsupplied energy in the revenue estimations, the model can be used to reflect geographical constraints in the efficiency analyses of NVE (the Norwegian Water Resources and Energy Directorate) (ml)

  9. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theories, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.

  10. Probability density function shape sensitivity in the statistical modeling of turbulent particle dispersion

    Science.gov (United States)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.

  11. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    Science.gov (United States)

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
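
    A sketch of fitting a POD curve by maximum likelihood to detect/non-detect outcomes. The logistic-in-log-concentration form is one common choice and is an assumption here, as are the spiking levels and the synthetic outcomes.

```python
# Fit a probability-of-detection (POD) curve to binary outcomes.
import numpy as np
from scipy.optimize import minimize

conc = np.repeat([0.5, 1.0, 2.0, 4.0, 8.0], 12)          # spiking levels
rng = np.random.default_rng(2)
p_true = 1 / (1 + np.exp(-(np.log(conc) - np.log(1.5)) / 0.4))
detected = rng.random(conc.size) < p_true                 # synthetic outcomes

def neg_log_lik(params):
    mu, sigma = params
    p = 1 / (1 + np.exp(-(np.log(conc) - mu) / sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)                        # numerical safety
    return -np.sum(detected * np.log(p) + (~detected) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
mu, sigma = res.x
print("LOD50 (concentration where POD = 0.5):", round(np.exp(mu), 2))
```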

  12. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by statistical model, in systems with the same and different number of protons and neutrons, separately, considering the Coulomb energy in the last system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from Weizsscker-Beth mass semiempiric formulae, generalized for compressible nuclei. In the study of a s surface energy coefficient, the great influence exercised by Coulomb energy and nuclear compressibility was verified. For a good adjust of beta stability lines and mass excess, the surface symmetry energy were established. (M.C.K.) [pt

  13. Using continuous time stochastic modelling and nonparametric statistics to improve the quality of first principles models

    DEFF Research Database (Denmark)

    A methodology is presented that combines modelling based on first principles and data based modelling into a modelling cycle that facilitates fast decision-making based on statistical methods. A strong feature of this methodology is that given a first principles model along with process data......, the corresponding modelling cycle model of the given system for a given purpose. A computer-aided tool, which integrates the elements of the modelling cycle, is also presented, and an example is given of modelling a fed-batch bioreactor....

  14. Can spatial statistical river temperature models be transferred between catchments?

    Science.gov (United States)

    Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.

    2017-09-01

    There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across

  15. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  16. Statistical models for thermal ageing of steel materials in nuclear power plants

    International Nuclear Information System (INIS)

    Persoz, M.

    1996-01-01

    Some categories of steel materials in nuclear power plants may be subjected to thermal ageing, whose extent depends on the steel chemical composition and the ageing parameters, i.e. temperature and duration. This ageing affects the 'impact strength' of the materials, which is a mechanical property. In order to assess the residual lifetime of these components, a probabilistic study has been launched, which takes into account the scatter over the input parameters of the mechanical model. Predictive formulae for estimating the impact strength of aged materials are important input data of the model. A data base has been created with impact strength results obtained from a laboratory ageing program, and statistical treatments have been undertaken. Two kinds of model have been developed with nonlinear regression methods (PROC NLIN, available in SAS/STAT). The first one, using a hyperbolic tangent function, is partly based on physical considerations; the second one, of an exponential type, is purely statistical. The difficulties consist in selecting the significant parameters and attributing initial values to the coefficients, which is a requirement of the NLIN procedure. This global statistical analysis has led to general models that are functions of the chemical variables and the ageing parameters. These models are as precise as (if not more precise than) the local models that had been developed earlier for specific values of ageing temperature and duration. This paper describes the data and the methodology used to build the models and analyses the results given by the SAS system. (author)
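
    The hyperbolic-tangent regression can be sketched with SciPy's curve_fit, including the NLIN-style requirement of initial coefficient values. The functional form and the synthetic data are illustrative, not the model of the paper.

```python
# Nonlinear regression of impact strength vs. ageing time with a tanh model.
import numpy as np
from scipy.optimize import curve_fit

def impact_strength(log_t, a, b, t0, w):
    # sigmoidal drop from the unaged level to a fully aged plateau
    return a + b * np.tanh((log_t - t0) / w)

rng = np.random.default_rng(4)
log_t = np.linspace(0, 5, 40)                       # log10(ageing hours)
data = impact_strength(log_t, 100, -30, 2.5, 0.8) + rng.normal(0, 3, 40)

p0 = [100, -30, 2.5, 1.0]                           # NLIN-style initial values
params, cov = curve_fit(impact_strength, log_t, data, p0=p0)
print("fitted parameters:", params.round(2))
print("standard errors:", np.sqrt(np.diag(cov)).round(2))
```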

  17. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  18. Tornadoes and related damage costs: statistical modelling with a semi-Markov approach

    Directory of Open Access Journals (Sweden)

    Guglielmo D’Amico

    2016-09-01

    Full Text Available We propose a statistical modelling approach for predicting and simulating tornado occurrences and accumulated cost distributions over a time interval. This is achieved by modelling the tornado intensity, measured on the Fujita scale, as a stochastic process. Since the Fujita scale divides tornado intensity into six states, the tornado intensity can be modelled using Markov and semi-Markov models. We demonstrate that the semi-Markov approach is able to reproduce the duration effect that is detected in tornado occurrence. The superiority of the semi-Markov model over the Markov chain model is also affirmed by means of a statistical hypothesis test. As an application, we compute the expected value and the variance of the costs generated by tornadoes over a given time interval in a given area. The paper contributes to the literature by demonstrating that semi-Markov models represent an effective tool for the physical analysis of tornadoes as well as for the estimation of the economic damage they cause.
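
    The semi-Markov construction can be sketched as an embedded Markov chain over Fujita states with state-dependent sojourn times, which is precisely what distinguishes it from the Markov chain model. The transition matrix, Weibull sojourns, and per-event costs below are illustrative assumptions, not the fitted values of the paper.

```python
# Semi-Markov simulation of tornado intensities and accumulated costs.
import numpy as np

rng = np.random.default_rng(6)
states = ["F0", "F1", "F2"]                     # truncated Fujita scale (toy)
P = np.array([[0.0, 0.7, 0.3],                  # embedded transition matrix
              [0.6, 0.0, 0.4],
              [0.5, 0.5, 0.0]])
shape = {"F0": 0.8, "F1": 1.0, "F2": 1.3}       # Weibull sojourn shapes
cost = {"F0": 0.1, "F1": 1.0, "F2": 8.0}        # damage per occurrence (M$)

def simulate(horizon=365.0):
    t, s, total = 0.0, 0, 0.0
    while True:
        # state-dependent waiting time: the semi-Markov ingredient
        t += 30.0 * rng.weibull(shape[states[s]])
        if t > horizon:
            return total
        total += cost[states[s]]
        s = rng.choice(3, p=P[s])

runs = [simulate() for _ in range(5000)]
print(f"expected yearly cost {np.mean(runs):.1f} M$, "
      f"variance {np.var(runs):.1f}")
```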

  19. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  20. Feature network models for proximity data : statistical inference, model selection, network representations and links with related models

    NARCIS (Netherlands)

    Frank, Laurence Emmanuelle

    2006-01-01

    Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor

  1. Pramana – Journal of Physics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Proceedings of the International Symposium on Nuclear Physics - Part II. pp 1-1. Foreword ... The canonical and grand canonical models for nuclear multifragmentation · G Chaudhuri S Das ... Multi-reaction-channel fitting calculations in a coupled-channel model: Photoinduced strangeness production · O Scholten A Usov.

  2. α -induced reactions on 115In: Cross section measurements and statistical model analysis

    Science.gov (United States)

    Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.

    2018-05-01

    Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, for the α+nucleus optical potential several different parameter sets exist, and large deviations, sometimes reaching even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between E_c.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between E_c.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also
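
    The best-fit selection described in the Results can be illustrated by a generic chi-square comparison; the sketch below uses placeholder cross sections and two made-up prediction sets standing in for statistical-model runs with different α+nucleus optical potentials (none of the numbers are the published data):

    ```python
    import numpy as np

    # Hypothetical measured cross sections (MeV, mb) and uncertainties;
    # placeholders, not the published 115In(α,γ)/(α,n) data.
    e_cm = np.array([9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0])
    sigma_exp = np.array([0.02, 0.09, 0.35, 1.1, 3.0, 7.2, 15.0])
    d_sigma = 0.1 * sigma_exp

    # Hypothetical predictions from statistical-model runs with two different
    # optical potentials (stand-ins, not actual TALYS or Atomki-V1 output).
    predictions = {
        "potential_A": sigma_exp * (1.0 + 0.05 * np.sin(e_cm)),
        "potential_B": sigma_exp * 1.8,
    }

    def reduced_chi2(pred):
        """Reduced chi-square of a prediction against the measured points."""
        return np.sum(((pred - sigma_exp) / d_sigma) ** 2) / len(sigma_exp)

    for name, pred in predictions.items():
        print(f"{name}: chi2/ndf = {reduced_chi2(pred):.2f}")
    ```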

  3. Statistical emulation of a tsunami model for sensitivity analysis and uncertainty quantification

    Directory of Open Access Journals (Sweden)

    A. Sarri

    2012-06-01

    Due to the catastrophic consequences of tsunamis, early warnings need to be issued quickly in order to mitigate the hazard. Additionally, there is a need to represent the uncertainty in the predictions of tsunami characteristics corresponding to uncertain trigger features (e.g. the position, shape, and speed of a landslide, or the sea floor deformation associated with an earthquake). Unfortunately, computer models are expensive to run. This leads to significant delays in predictions and makes uncertainty quantification impractical. Statistical emulators run almost instantaneously and can represent the outputs of the computer model well. In this paper, we use the outer product emulator to build a fast statistical surrogate of a landslide-generated tsunami computer model. This Bayesian framework enables us to build the emulator by combining prior knowledge of the computer model's properties with a few carefully chosen model evaluations. The good performance of the emulator is validated using the leave-one-out method.
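
    The outer product emulator is a specialized construction, but the core idea, a fast statistical surrogate validated by leave-one-out, can be sketched with a generic Gaussian-process emulator; the simulator below is a toy stand-in, not the tsunami model used in the paper:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    # Stand-in for expensive simulator runs: inputs might be (landslide
    # position, speed), output a wave-height summary. Toy placeholder only.
    def expensive_simulator(x):
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 1]

    X_train = rng.uniform(0, 1, size=(30, 2))   # a few carefully chosen runs
    y_train = expensive_simulator(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True)

    # Leave-one-out validation, as in the paper's emulator check: refit
    # without each point and compare the prediction to the held-out run.
    errors = []
    for i in range(len(X_train)):
        mask = np.arange(len(X_train)) != i
        gp.fit(X_train[mask], y_train[mask])
        errors.append(gp.predict(X_train[i:i + 1])[0] - y_train[i])
    print(f"LOO RMSE: {np.sqrt(np.mean(np.square(errors))):.4f}")
    ```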

  4. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    Science.gov (United States)

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is growing exponentially. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms, with a reduction of the localization error of up to 66%.
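
    A minimal sketch of the linear error-model idea, assuming synthetic calibration data in place of the paper's measurement campaign: fit e(d) = a*d + b by least squares, then subtract the fitted bias from raw range estimates before localization:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical calibration data: true distances and UWB range estimates
    # whose error grows roughly linearly with distance (illustrative values).
    d_true = np.linspace(1.0, 20.0, 200)
    d_meas = d_true + (0.05 * d_true + 0.10) + rng.normal(0.0, 0.05, d_true.shape)

    # Least-squares fit of the linear error model  e(d) = a*d + b,
    # using the measured distance as the regressor available at run time.
    A = np.column_stack([d_meas, np.ones_like(d_meas)])
    (a, b), *_ = np.linalg.lstsq(A, d_meas - d_true, rcond=None)

    def correct_range(d):
        """Subtract the fitted linear bias from a raw UWB range estimate."""
        return d - (a * d + b)

    raw = 10.5
    print(f"fitted model: e(d) = {a:.3f}*d + {b:.3f}")
    print(f"raw {raw:.2f} m -> corrected {correct_range(raw):.2f} m")
    ```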

  5. Fractional statistics in 2+1 dimensions through the Gaussian model

    International Nuclear Information System (INIS)

    Murthy, G.

    1986-01-01

    The free massless field in 2+1 dimensions is written as an "integral" over free massless fields in 1+1 dimensions. Taking the operators with fractional dimension in the Gaussian model as a springboard, we construct operators with fractional statistics in the former theory.

  6. Statistical sampling and modelling for cork oak and eucalyptus stands

    NARCIS (Netherlands)

    Paulo, M.J.

    2002-01-01

    This thesis focuses on the use of modern statistical methods to solve problems of sampling, optimal cutting time, and agricultural modelling in Portuguese cork oak and eucalyptus stands. The results are contained in five chapters that have been submitted for publication.

  7. The nuclear liquid-vapor phase transition: Equilibrium between phases or free decay in vacuum?

    International Nuclear Information System (INIS)

    Phair, L.; Moretto, L.G.; Elliott, J.B.; Wozniak, G.J.

    2002-01-01

    Recent analyses of multifragmentation in terms of Fisher's model, and the related construction of a phase diagram, bring forth the problem of the true existence of the vapor phase and the meaning of its associated pressure. Our analysis shows that a thermal emission picture is equivalent to a Fisher-like equilibrium description, which avoids the problem of the vapor and explains the recently observed Boltzmann-like distribution of the emission times. In this picture, a simple Fermi gas thermometric relation is naturally justified. Low energy compound nucleus emission of intermediate mass fragments is shown to scale according to Fisher's formula and can be fit simultaneously with the much higher energy ISiS multifragmentation data.
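
    The scaling referred to is Fisher's droplet formula; one commonly quoted form (the paper's notation and parametrization may differ in detail) is:

    ```latex
    % Fisher droplet-model yield for a fragment of mass A (one common form):
    %   q_0: normalization,  \tau: topological critical exponent,
    %   \Delta\mu: chemical-potential distance from coexistence,
    %   c_0 \varepsilon A^{\sigma}: surface free-energy cost at temperature T.
    \begin{equation*}
      n_A(T) \;=\; q_0\, A^{-\tau}
      \exp\!\left( \frac{A\,\Delta\mu}{T} \;-\; \frac{c_0\,\varepsilon\, A^{\sigma}}{T} \right)
    \end{equation*}
    ```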

  8. Snow cover and End of Summer Snowline statistics from a simple stochastic model

    Science.gov (United States)

    Petrelli, A.; Crouzy, B.; Perona, P.

    2012-04-01

    One essential parameter characterizing snow cover statistics is the End Of Summer Snowline (EOSS), which is also a good indicator of actual climatic trends in mountain regions. EOSS is usually modelled by means of spatially distributed, physically based models, which typically require heavy parameterization. In this paper we validate the simple stochastic model proposed by Perona et al. (2007) by showing that the snow cover statistics and the position of EOSS can in principle be explained by only four essential (meteorological) parameters. Perona et al. (2007) proposed a model accounting for stochastic snow accumulation in the cold period and deterministic melting dynamics in the warm period, and studied the long-term statistical distribution of the snow depth. By reworking the ensemble average of the steady-state evolution equation, we single out a relationship between the snow depth statistics (including the position of EOSS) and the parameters involved. The established relationship is validated using 50 years of field data from 73 Swiss stations located above 2000 m a.s.l. First, the meteorological parameters are estimated. Snow height data are used as a precipitation proxy, with temperature data used to estimate the snow water equivalent (SWE) during each precipitation event. Thresholds are used both to separate accumulation from actual precipitation and wind transport phenomena, and to better assess the summer melting rate, considered constant over the melting period in accordance with the simplified model. First results show that data for most of the weather stations actually scale with the proposed relationship. This indicates that, in the long term, the effect of spatial and temporal noise masks most of the process detail, so that minimalist models suffice to obtain reliable statistics. Future work will test the validity of this approach at different spatial scales, e.g., regional, continental, and planetary. Reference: P. Perona, A. Porporato, and L. Ridolfi, "A
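
    A minimal sketch of the minimalist model's structure, assuming illustrative parameter values (not fitted to the Swiss data): compound-Poisson accumulation in the cold season, constant melt in the warm season, and long-run statistics of the carried-over snow depth:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Four "essential" parameters, in the spirit of the minimalist model
    # (values are illustrative, not estimated from the station data):
    lam = 0.2          # snowfall arrival rate in the cold season [events/day]
    mean_swe = 30.0    # mean snow water equivalent per event [mm]
    melt_rate = 8.0    # constant melt rate in the warm season [mm/day]
    t_cold, t_warm = 240, 125  # season lengths [days]

    def simulate_year(h0):
        """One accumulation season (compound Poisson) + one melt season (linear)."""
        n_events = rng.poisson(lam * t_cold)
        h = h0 + rng.exponential(mean_swe, n_events).sum()
        return max(h - melt_rate * t_warm, 0.0)

    # Long-term statistics of the snow depth carried over each year; a positive
    # long-run mean suggests the site sits above the End Of Summer Snowline.
    h, carryover = 0.0, []
    for _ in range(5000):
        h = simulate_year(h)
        carryover.append(h)
    burn = carryover[100:]
    print(f"mean end-of-summer depth: {np.mean(burn):.1f} mm "
          f"(std {np.std(burn):.1f})")
    ```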

  9. Statistical Models for Inferring Vegetation Composition from Fossil Pollen

    Science.gov (United States)

    Paciorek, C.; McLachlan, J. S.; Shang, Z.

    2011-12-01

    Fossil pollen provide information about vegetation composition that can be used to help understand how vegetation has changed in the past. However, these data have not traditionally been analyzed in a way that allows for statistical inference about spatio-temporal patterns and trends. We build a Bayesian hierarchical model called STEPPS (Spatio-Temporal Empirical Prediction from Pollen in Sediments) that predicts forest composition in southern New England, USA, over the last two millennia based on fossil pollen. The critical relationships between abundances of tree taxa in the pollen record and abundances in actual vegetation are estimated using modern (Forest Inventory and Analysis) data and (witness tree) data from colonial records. This gives us two time points at which both pollen and direct vegetation data are available. Based on these relationships, and incorporating our uncertainty about them, we predict forest composition using fossil pollen. We estimate the spatial distribution and relative abundances of tree species and draw inference about how these patterns have changed over time. Finally, we describe ongoing work to extend the modeling to the upper Midwest of the U.S., including an approach to infer tree density and thereby estimate the prairie-forest boundary in Minnesota and Wisconsin. This work is part of the PalEON project, which brings together a team of ecosystem modelers, paleoecologists, and statisticians with the goal of reconstructing vegetation responses to climate during the last two millennia in the northeastern and midwestern United States. The estimates from the statistical modeling will be used to assess and calibrate ecosystem models that are used to project ecological changes in response to global change.

  10. Bayesian Nonparametric Statistical Inference for Shock Models and Wear Processes.

    Science.gov (United States)

    1979-12-01

    also note that the results in Section 2 do not depend on the support of F.) This shock model has been studied by Esary, Marshall and Proschan (1973...Barlow and Proschan (1975), among others. The analogy of the shock model in risk and actuarial analysis has been given by Bühlmann (1970, Chapter 2... Mathematical Statistics, Vol. 4, pp. 894-906. Billingsley, P. (1968), CONVERGENCE OF PROBABILITY MEASURES, John Wiley, New York. Bühlmann, H. (1970

  11. A Formal Approach for RT-DVS Algorithms Evaluation Based on Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Shengxin Dai

    2015-01-01

    Energy saving is a crucial concern in embedded real-time systems. Many RT-DVS algorithms have been proposed to save energy while preserving deadline guarantees. This paper presents a novel approach to evaluating RT-DVS algorithms using statistical model checking. A scalable framework is proposed for RT-DVS algorithm evaluation, in which the relevant components are modeled as stochastic timed automata, and the evaluation metrics, including utilization bound, energy efficiency, battery awareness, and temperature awareness, are expressed as statistical queries. Evaluation of these metrics is performed by verifying the corresponding queries using UPPAAL-SMC and analyzing the statistical information provided by the tool. We demonstrate the applicability of our framework via a case study of five classical RT-DVS algorithms.
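
    Statistical model checking estimates such queries by simulating the stochastic model many times; the sketch below is a drastically simplified stand-in (a single task with uniformly random execution time, not a UPPAAL-SMC model) that shows the Monte Carlo estimation step:

    ```python
    import random

    random.seed(3)

    def simulate_run(speed_factor):
        """Toy stochastic task execution: returns True if the deadline is met.

        Stand-in for one run of a stochastic timed automaton; a DVS policy
        scaling frequency by `speed_factor` stretches the execution time.
        """
        wcet, deadline = 8.0, 10.0
        exec_time = random.uniform(0.5 * wcet, wcet) / speed_factor
        return exec_time <= deadline

    def estimate_probability(speed_factor, n_runs=20000):
        """Monte Carlo estimate of Pr[deadline met], as an SMC tool reports."""
        hits = sum(simulate_run(speed_factor) for _ in range(n_runs))
        return hits / n_runs

    for f in (0.6, 0.8, 1.0):
        print(f"speed factor {f}: Pr[deadline met] ~ "
              f"{estimate_probability(f):.3f}")
    ```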

  12. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    El-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The appropriate distribution depends on the nature of the data being analyzed. The present paper deals with the analysis of some statistical distributions used in reliability, in order to identify the best-fitting distribution. The calculations rely on circuit quantity parameters obtained using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate.
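
    The distribution-ranking step can be reproduced generically with maximum-likelihood fits and a log-likelihood (or AIC) comparison; the sketch below uses synthetic lifetimes, not the Relex-derived circuit data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)

    # Hypothetical times-to-failure for the alarm circuit (hours); placeholders,
    # not the data analyzed in the paper.
    ttf = rng.weibull(1.8, 200) * 5000.0

    candidates = {
        "weibull": stats.weibull_min,
        "exponential": stats.expon,
    }

    # Fit each distribution by maximum likelihood and rank by log-likelihood
    # (equivalently AIC, since the parameter counts differ by one).
    for name, dist in candidates.items():
        params = dist.fit(ttf, floc=0.0)   # fix location at zero for lifetimes
        loglik = np.sum(dist.logpdf(ttf, *params))
        k = len(params) - 1                # free parameters (location fixed)
        print(f"{name}: loglik = {loglik:.1f}, AIC = {2 * k - 2 * loglik:.1f}")
    ```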

  13. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  14. Statistical model of exotic rotational correlations in emergent space-time

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan

    2017-06-06

    A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.

  15. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    Science.gov (United States)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models, extensively applied in the geometric correction of satellite images and in map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, which results from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical significance test is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over TR.
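
    A minimal sketch of significance-test-based parameter screening, on a synthetic over-parameterized regression standing in for a terrain-dependent RFM: fit the full model, keep only the coefficients whose t-test rejects zero, and refit:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Hypothetical over-parameterized design: many candidate terms, few truly
    # active, a stand-in for the ~80 terms of a terrain-dependent RFM.
    n, p = 60, 20
    X = rng.normal(size=(n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = [2.0, -1.5, 1.0]          # only three informative terms
    y = X @ beta_true + rng.normal(0.0, 0.5, n)

    def significant_terms(X, y, alpha=0.05):
        """Keep coefficients whose t-statistic rejects H0: beta_j = 0."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = X.shape[0] - X.shape[1]
        s2 = resid @ resid / dof
        cov = s2 * np.linalg.inv(X.T @ X)
        t = beta / np.sqrt(np.diag(cov))
        p_vals = 2 * stats.t.sf(np.abs(t), dof)
        return np.where(p_vals < alpha)[0]

    keep = significant_terms(X, y)
    beta_red, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    print(f"retained terms: {keep}")
    print(f"reduced-model coefficients: {np.round(beta_red, 2)}")
    ```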

  16. Comments on statistical issues in numerical modeling for underground nuclear test monitoring

    International Nuclear Information System (INIS)

    Nicholson, W.L.; Anderson, K.K.

    1993-01-01

    The Symposium concluded with prepared summaries by four experts in the disciplines involved. These experts made no mention of statistics or of the statistical content of the issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and are often treated successfully by them. This note expands upon these extemporaneous remarks.

  17. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    Science.gov (United States)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
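
    Schematically (simplifying the paper's weighting and notation), the cost assigns to each path of quasi-equilibrium densities ρ(t) the ensemble-averaged, weighted squared residual of the Liouville equation:

    ```latex
    % Schematic lack-of-fit cost for a path of densities \rho(t); W is the
    % arbitrary weight entering the cost, and \{H,\rho\} the Poisson bracket.
    % The best-fit evolution of the resolved variables minimizes \sigma[\rho].
    \begin{equation*}
      \sigma[\rho] \;=\; \int_0^T
      \Big\langle \big\| \,\partial_t \rho \;-\; \{H, \rho\}\, \big\|_W^2 \Big\rangle \, dt
    \end{equation*}
    ```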

  18. Small nodule detectability evaluation using a generalized scan-statistic model

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M; Lewitt, Robert M

    2006-01-01

    This paper investigates the use of the scan statistic for evaluating the detectability of small nodules in medical images. The scan-statistic method is often used in applications in which random fields must be searched for abnormal local features. Several results from detection-with-localization theory are reviewed, and a generalization is presented using the noise-nodule distribution obtained by scanning arbitrary areas. One benefit of the noise-nodule model is that it enables determination of the scan-statistic distribution using only a few image samples, in a way suitable for both simulation and experimental setups. Also, based on the noise-nodule model, the case of multiple targets per image is addressed, and an image abnormality test using the likelihood ratio, as well as an alternative test using multiple decision thresholds, is derived. The results obtained reveal that, in the case of low-contrast nodules or multiple nodules, the usual test strategy based on a single decision threshold underperforms compared with the alternative tests. This is a consequence of the fact that not only the contrast or the size, but also the number of suspicious nodules is a clue indicating image abnormality. In the case of the likelihood ratio test, the multiple clues are unified in a single decision variable. Other tests that process multiple clues differently do not necessarily produce a unique ROC curve, as shown in examples using a test involving two decision thresholds. We present examples with two-dimensional time-of-flight (TOF) and non-TOF PET image sets analysed using the scan statistic for different search areas, as well as with a fixed-position observer.
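
    The null distribution of the scan statistic, i.e. the maximum local response over a search area containing only noise, can be estimated by Monte Carlo; the sketch below uses a Gaussian noise image and a square window as a stand-in for the paper's nodule model:

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    # Monte Carlo estimate of the scan-statistic null distribution: the maximum
    # window response over a pure-noise image (a "noise nodule" response).
    img_size, win = 128, 5

    def scan_maximum():
        """Max of window-summed noise over all window positions in the image."""
        noise = rng.normal(size=(img_size, img_size))
        # Box filter via 2-D cumulative sums, then take the maximum local sum.
        c = np.cumsum(np.cumsum(noise, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        sums = (c[win:, win:] - c[:-win, win:]
                - c[win:, :-win] + c[:-win, :-win])
        return sums.max() / win  # standardize: window sum has std = win

    null_max = np.array([scan_maximum() for _ in range(500)])
    threshold = np.quantile(null_max, 0.95)  # 5% false-positives-per-image rate
    print(f"95th percentile of scan maximum: {threshold:.2f}")
    ```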

  19. Statistical modelling of monthly mean sea level at coastal tide gauge stations along the Indian subcontinent

    Digital Repository Service at National Institute of Oceanography (India)

    Srinivas, K.; Das, V.K.; DineshKumar, P.K.

    This study investigates the predictive potential of statistical models for the monthly mean sea level at different stations along the west and east coasts of the Indian subcontinent. Statistical modelling of the monthly mean...

  20. Spherical Process Models for Global Spatial Statistics

    KAUST Repository

    Jeong, Jaehong

    2017-11-28

    Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture the spatial and temporal behavior of these global data sets. Though the geodesic distance is the most natural metric for measuring distance on the surface of a sphere, mathematical limitations have compelled statisticians to use the chordal distance to compute the covariance matrix in many applications instead, which may cause physically unrealistic distortions. Therefore, covariance functions directly defined on a sphere using the geodesic distance are needed. We discuss the issues that arise when dealing with spherical data sets on a global scale and provide references to recent literature. We review the current approaches to building process models on spheres, including the differential operator, the stochastic partial differential equation, the kernel convolution, and the deformation approaches. We illustrate realizations obtained from Gaussian processes with different covariance structures and the use of isotropic and nonstationary covariance models through deformations and geographical indicators for global surface temperature data. To assess the suitability of each method, we compare their log-likelihood values and prediction scores, and we end with a discussion of related research problems.
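
    The distinction between the two metrics is easy to make concrete: for points on a sphere of radius R separated by central angle θ, the geodesic distance is Rθ while the chordal distance is 2R sin(θ/2). A small sketch using the standard formulas (not code from the paper):

    ```python
    import numpy as np

    def geodesic(lat1, lon1, lat2, lon2, radius=6371.0):
        """Great-circle distance on a sphere (km), the natural global metric."""
        phi1, phi2 = np.radians(lat1), np.radians(lat2)
        dlon = np.radians(lon2 - lon1)
        cos_angle = (np.sin(phi1) * np.sin(phi2)
                     + np.cos(phi1) * np.cos(phi2) * np.cos(dlon))
        return radius * np.arccos(np.clip(cos_angle, -1.0, 1.0))

    def chordal(lat1, lon1, lat2, lon2, radius=6371.0):
        """Straight-line distance through the sphere, often used for validity."""
        angle = geodesic(lat1, lon1, lat2, lon2, radius) / radius
        return 2.0 * radius * np.sin(angle / 2.0)

    # The two metrics agree locally but diverge for distant points, which is
    # where chordal-distance covariances can distort physical relationships.
    for d_lon in (1.0, 90.0, 180.0):
        g = geodesic(0.0, 0.0, 0.0, d_lon)
        c = chordal(0.0, 0.0, 0.0, d_lon)
        print(f"lon gap {d_lon:5.1f} deg: geodesic {g:8.1f} km, "
              f"chordal {c:8.1f} km")
    ```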