WorldWideScience

Sample records for weisskopf model

  1. Memorial symposium for Victor Weisskopf.

    CERN Multimedia

    Maximilien Brice

    2002-01-01

    A memorial symposium for Victor Weisskopf, CERN Director-General from 1961 to 1965, was held at CERN on 17 September 2002. Photo 01: L. Maiani: Welcome. Photo 02: J. D. Jackson: Highlights from the career and scientific works of Victor F. Weisskopf. Photos 05-09: M. Hine and K. Johnsen: Working with Viki at CERN. Photo 10: M. Jacob: Knowledge and Wonder. Photo 14: K. Worth (Viki's daughter): Reminiscences.

  2. Crisis - Weisskopf's view

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1993-10-15

    'We are facing a crisis, not only in particle physics but in the whole of fundamental science', said Victor Weisskopf, doyen of quantum physics, during his traditional summer CERN stopover. 'Basic science - science for its own sake - and especially high energy physics, is really in danger.' As well as explaining how this has come about, the former CERN Director General (1961-5) proposed action to reverse the trend. Rather than dividing science into the conventional 'big' and 'small' camps, he slices across another axis. On one hand there is obviously applicable 'terrestrial science' - biology, medicine, solid state, much of the nuclear sector, nonlinear behaviour, chaos, ... all directly connected with processes that happen on Earth. On the other there is 'cosmic science' - astronomy, astrophysics, particle physics and some of the nuclear sector - addressing deeper issues, not attainable naturally on this planet at all, and where applications are less immediately obvious. (This classification is not completely watertight - even cosmic science can, and does, foster immediate spinoff, Weisskopf points out, citing Georges Charpak's detector work.) Tracing the evolution of science in this century, Weisskopf sees the rapid evolution of American influence in the 1930s as a turning point. Before then, the United States had not been in the front line, and it had been important for US researchers to spend some time in Europe.

  3. Memorial Symposium for Victor Weisskopf

    CERN Multimedia

    2002-01-01

    Victor 'Viki' Weisskopf, former Director General of CERN from 1961 to 1965, passed away five months ago. At that time, the Bulletin dedicated its cover page to this brilliant physicist (19-20/2002). Now, CERN has organised a Memorial Symposium for next Tuesday 17 September, to which you are cordially invited. This tribute will include the following speeches: L. Maiani: Welcome; J. D. Jackson: Highlights from the career and scientific works of Victor F. Weisskopf; M. Hine and K. Johnsen: Working with Viki at CERN; M. Jacob: Knowledge and Wonder; A member of Viki's family: Reminiscences. The Memorial Symposium will take place in the Main Auditorium at 15h. Drinks will be served in Pas Perdus at 17h 30.

  4. Crisis - Weisskopf's view

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    'We are facing a crisis, not only in particle physics but in the whole of fundamental science', said Victor Weisskopf, doyen of quantum physics, during his traditional summer CERN stopover. 'Basic science - science for its own sake - and especially high energy physics, is really in danger.' As well as explaining how this has come about, the former CERN Director General (1961-5) proposed action to reverse the trend. Rather than dividing science into the conventional 'big' and 'small' camps, he slices across another axis. On one hand there is obviously applicable 'terrestrial science' - biology, medicine, solid state, much of the nuclear sector, nonlinear behaviour, chaos, ... all directly connected with processes that happen on Earth. On the other there is 'cosmic science' - astronomy, astrophysics, particle physics and some of the nuclear sector - addressing deeper issues, not attainable naturally on this planet at all, and where applications are less immediately obvious. (This classification is not completely watertight - even cosmic science can, and does, foster immediate spinoff, Weisskopf points out, citing Georges Charpak's detector work.) Tracing the evolution of science in this century, Weisskopf sees the rapid evolution of American influence in the 1930s as a turning point. Before then, the United States had not been in the front line, and it had been important for US researchers to spend some time in Europe.

  5. Memorial Symposium for Victor Weisskopf

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit

    2002-01-01

    This tribute will include the following speeches: L. Maiani: Welcome; J. D. Jackson: Highlights from the career and scientific works of Victor F. Weisskopf; M. Hine and K. Johnsen: Working with Viki at CERN; M. Jacob: Knowledge and Wonder; A member of Viki's family: Reminiscences.

  6. Obituary Professor Victor Weisskopf - atom-bomb and CERN physicist

    CERN Multimedia

    Dalyell, T

    2002-01-01

    The rise of Nazism brought horror, humiliation, death and torture to so many free-thinking people - Jews, and other minorities. There were some lucky ones, like the Weisskopf family, who were able to escape. Victor Weisskopf was born and brought up in Austria in the spirit of German culture and said that he considered his transfer from Europe to the USA an invaluable source of intellectual enrichment (2 pages).

  7. Comparison Between Weisskopf and Thomas-Fermi Model for Particle Emission Widths from Hot Deformed Nuclei

    International Nuclear Information System (INIS)

    Surowiec, Aa.; Pomorski, K.; Schmitt, Ch.; Bartel, J.

    2002-01-01

    The emission widths Γ_n and Γ_p for emission of neutrons and protons are calculated within the Thomas-Fermi model, which we have recently developed, and are compared with those obtained in the usual Weisskopf approach for the case of zero angular momentum. Both methods yield quite similar results at small deformations, but rather important differences are observed for very deformed shapes, in particular for charged particles. A possible generalization of the model for emission of α-particles is also discussed. (author)
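
    As a point of reference for the comparison above, the textbook Weisskopf evaporation width can be sketched numerically as below. The sketch assumes a constant-temperature level density and a purely geometric inverse cross section, which are standard simplifications and not the Thomas-Fermi ingredients of the paper; all input numbers (excitation energy, separation energies, temperature, Coulomb barrier) are illustrative placeholders.

        import numpy as np

        HBARC = 197.327   # MeV fm
        MNUC  = 939.0     # nucleon mass, MeV/c^2

        def weisskopf_width(Estar, B, T, A_res, spin=0.5, coulomb_barrier=0.0):
            """Textbook Weisskopf evaporation width (MeV):
               Gamma = (2s+1) m / (pi^2 hbar^2) * (1/rho_i(E*)) *
                       Integral eps * sigma_inv(eps) * rho_f(E* - B - eps) d(eps)
            with rho(E) ~ exp(E/T) (constant temperature) and a geometric
            sigma_inv = pi R^2 (1 - V_C/eps) above the Coulomb barrier."""
            R = 1.2 * A_res ** (1.0 / 3.0)                        # fm
            eps = np.linspace(1e-3, max(Estar - B, 1e-3), 2000)   # MeV
            sigma = np.pi * R**2 * np.clip(1.0 - coulomb_barrier / eps, 0.0, None)
            ratio = np.exp(-(B + eps) / T)      # rho_f(E*-B-eps) / rho_i(E*)
            integral = np.trapz(eps * sigma * ratio, eps)         # MeV^2 fm^2
            prefac = (2 * spin + 1) * MNUC / (np.pi**2 * HBARC**2)  # 1/(MeV fm^2)
            return prefac * integral

        # illustrative hot nucleus: E* = 60 MeV, T = 2 MeV, residue A = 160
        print("Gamma_n ~ %.3f MeV" % weisskopf_width(60.0, B=8.0, T=2.0, A_res=160))
        print("Gamma_p ~ %.3f MeV" % weisskopf_width(60.0, B=6.0, T=2.0, A_res=160, coulomb_barrier=10.0))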

  8. Particle correlations in the recombination model associated with modified Kuti-Weisskopf structure functions

    International Nuclear Information System (INIS)

    Takasugi, E.; Tata, X.

    1982-01-01

    The recombination model associated with modified Kuti-Weisskopf multiquark structure functions is used to analyze particle production by hadronic collisions. The justification of the use of the impulse approximation in these processes and the universal nature of the recombination process are discussed. Single-meson inclusive production in the fragmentation domains of the proton, the pion, and the kaon is used as an input to determine the primitive structure functions. Our parameter-free predictions for low-p_T multimeson and associated meson-baryon inclusive production are found to be in good agreement with a large amount of recently obtained correlation data. It is pointed out, however, that reactions involving multivalence recombination fall outside the scope of present considerations.

  9. Professor V. Weisskopf, CERN Director General (1961-1965)

    CERN Document Server

    1962-01-01

    Well known theoretical physicist Victor Weisskopf has died aged 93. Born in Austria, he later worked with Schrödinger in Berlin before emigrating to the US in 1937, where he joined the Manhattan project in 1944, and was witness to the Trinity Test in July 1945. In 1946 he became professor of physics at MIT. He took leave of absence to be Director General of CERN, the European Organization for Nuclear Research, from 1961-1965.

  10. Duality and corrections to the van Royen-Weisskopf formula

    International Nuclear Information System (INIS)

    Durand, B.; Durand, L.

    1981-01-01

    We propose that duality can be used in conjunction with QCD calculations of the cross section for e⁺e⁻ → qq̄ to evaluate relativistic and radiative corrections to the leptonic widths of the ψ and Υ states. We use this method to discuss relativistic corrections to the van Royen-Weisskopf formula for leptonic widths. We also point out that this formula is in error by an important factor 4m_q²/M_n². (orig.)
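
    For orientation, the lowest-order van Royen-Weisskopf formula referred to above reads Γ(V → e⁺e⁻) = 16π α² e_q² |ψ(0)|² / M_V², and a minimal evaluation is sketched below. The charm-quark charge, the value of |ψ(0)|² and the J/ψ mass are illustrative textbook-level inputs, not numbers taken from the paper, and the relativistic and radiative corrections discussed by the authors are not included.

        import math

        ALPHA = 1.0 / 137.036    # fine-structure constant

        def gamma_ee_van_royen_weisskopf(e_q, psi0_sq, M):
            """Lowest-order leptonic width (GeV):
               Gamma(V -> e+ e-) = 16 pi alpha^2 e_q^2 |psi(0)|^2 / M_V^2
            with |psi(0)|^2 in GeV^3 and M_V in GeV."""
            return 16.0 * math.pi * ALPHA**2 * e_q**2 * psi0_sq / M**2

        # illustrative charmonium inputs: e_c = 2/3, |psi(0)|^2 ~ 0.06 GeV^3, M = 3.097 GeV
        width = gamma_ee_van_royen_weisskopf(2.0 / 3.0, 0.06, 3.097)
        print("Gamma_ee ~ %.1f keV" % (width * 1e6))   # of the order of the measured few keV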

  11. Semigroup evolution in the Wigner-Weisskopf pole approximation with Markovian spectral coupling

    International Nuclear Information System (INIS)

    Shikerman, F.; Peer, A.; Horwitz, L. P.

    2011-01-01

    We establish the relation between the Wigner-Weisskopf theory for the description of an unstable system and the theory of coupling to an environment. According to the Wigner-Weisskopf general approach, even within the pole approximation, the evolution of a total system subspace is not an exact semigroup for multichannel decay unless the projectors into eigenstates of the reduced evolution generator W(z) are orthogonal. With multichannel decay, the projectors must be evaluated at different pole locations z_α ≠ z_β, and since the orthogonality relation does not generally hold at different values of z, the semigroup evolution is a poor approximation for the multichannel decay, even for very weak coupling. Nevertheless, if the theory is generalized to take into account interactions with an environment, one can ensure orthogonality of the W(z) projectors regardless of the number of poles. Such a possibility occurs when W(z), and hence its eigenvectors, is independent of z, which corresponds to the Markovian limit of the coupling to the continuum spectrum.
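
    As background to the pole approximation used above, the sketch below shows, for a single decay channel, how a nearly exponential survival probability with the golden-rule width Γ = 2πv²ρ emerges when a discrete state is coupled to a dense, discretized quasi-continuum. This is a generic Friedrichs-type toy model meant only to illustrate the Wigner-Weisskopf regime; it does not reproduce the multichannel, z-dependent analysis of the paper, and all parameters are illustrative.

        import numpy as np

        # discrete state |0> at E0 coupled with constant strength v to N band levels
        N, W, E0, v = 2400, 20.0, 0.0, 0.02        # illustrative units, hbar = 1
        Ek = np.linspace(-W / 2, W / 2, N)

        H = np.zeros((N + 1, N + 1))
        H[0, 0] = E0
        H[0, 1:] = H[1:, 0] = v
        H[1:, 1:] = np.diag(Ek)

        Evals, Evecs = np.linalg.eigh(H)
        c0 = Evecs[0, :]                            # overlaps <0|eigenstate k>

        t = np.linspace(0.0, 15.0, 151)
        # survival amplitude a(t) = sum_k |<0|k>|^2 exp(-i E_k t)
        a = (np.abs(c0) ** 2 * np.exp(-1j * np.outer(t, Evals))).sum(axis=1)

        Gamma = 2 * np.pi * v**2 * (N / W)          # golden-rule (Wigner-Weisskopf) width
        for ti, Pi in zip(t[::30], np.abs(a[::30]) ** 2):
            print("t = %4.1f   P(t) = %.4f   exp(-Gamma t) = %.4f" % (ti, Pi, np.exp(-Gamma * ti)))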

  12. Semigroup evolution in the Wigner-Weisskopf pole approximation with Markovian spectral coupling

    Energy Technology Data Exchange (ETDEWEB)

    Shikerman, F.; Peer, A. [Physics department and BINA center for nano-technology, Bar Ilan University, Ramat Gan 52900 (Israel); Horwitz, L. P. [Physics department and BINA center for nano-technology, Bar Ilan University, Ramat Gan 52900 (Israel); School of Physics, Tel-Aviv University, Ramat-Aviv 69978 (Israel); Department of Physics, Ariel University Center of Samaria, Ariel 40700 (Israel)

    2011-07-15

    We establish the relation between the Wigner-Weisskopf theory for the description of an unstable system and the theory of coupling to an environment. According to the Wigner-Weisskopf general approach, even within the pole approximation, the evolution of a total system subspace is not an exact semigroup for multichannel decay unless the projectors into eigenstates of the reduced evolution generator W(z) are orthogonal. With multichannel decay, the projectors must be evaluated at different pole locations z_α ≠ z_β, and since the orthogonality relation does not generally hold at different values of z, the semigroup evolution is a poor approximation for the multichannel decay, even for very weak coupling. Nevertheless, if the theory is generalized to take into account interactions with an environment, one can ensure orthogonality of the W(z) projectors regardless of the number of poles. Such a possibility occurs when W(z), and hence its eigenvectors, is independent of z, which corresponds to the Markovian limit of the coupling to the continuum spectrum.

  13. Systematic measurements of the Bohr-Weisskopf effect at ISOLDE

    CERN Multimedia

    Nojiri, Y; Matsuki, S; Ragnarsson, I; Neugart, R; Redi, O; Stroke, H H; Duong, H T; Marescaux, D; Pinard, J; Juncar, P; Ekstrom, C; Pellarin, M; Vialle, J-L; Inamura, T

    2002-01-01

    The "Bohr-Weisskopf" effect, or "hyperfine structure (hfs) anomaly", which results from the effect of the distribution of nuclear magnetization on the electro-nuclear interaction, will be measured systematically at the PS Booster ISOLDE, first for a long chain of radioactive cesium isotopes, analogously to previous isotope shift and hfs studies. In addition to the direct measurement of magnetic moment values, the results are expected to provide independent data for testing nuclear wavefunctions; these will be of importance for interpreting systematic parity non-conservation experiments, complementary to the single isotope study which requires a high precision knowledge of the electron wavefunction. Substantial progress in these calculations has been achieved recently. Precision measurements of the hfs splittings and nuclear magnetic moments are required, with sensitivity adequate for the radioactive isotopes produced. A triple resonance atomic beam magnetic resonance apparatus with optical pumping state s...

  14. Systematic Measurements of the Bohr-Weisskopf Effect at ISOLDE

    CERN Multimedia

    2002-01-01

    Nuclear electric and magnetic structure properties are measurable by high-resolution atomic spectroscopy through isotope shifts and the Bohr-Weisskopf effect (hyperfine structure anomalies). The greatest value of these measurements is when made systematically over a large number of isotopes. This has been done in the case of isotope shifts most extensively by the experiment at ISOLDE. To date the magnetic distribution studies are few and isolated. Here we propose to initiate a program at ISOLDE to measure hfs anomalies systematically. The experiments, requiring high-precision data on magnetic dipole constants as well as on nuclear g-factors, will be done by atomic-beam magnetic resonance with the use of laser excitation for polarization of the beam and a sixpole magnet acting as an analyser. The heavy alkali elements are the most promising candidates for hfs anomaly studies because of the large effect expected, the high production yields at ISOLDE and most importantly, the interesting variations...

  15. Recombination model and baryon production by pp and πp collisions

    International Nuclear Information System (INIS)

    Takasugi, E.; Tata, X.

    1979-12-01

    The recombination model predictions for baryon production, using modified Kuti-Weisskopf structure functions, are in good agreement with the pp and πp collision data. The indistinguishability of sea quarks naturally accounts for the difference in the p and anti-p spectra in the pion fragmentation region. 4 figures, 2 tables

  16. Complex energy eigenstates in a model with two equal mass particles

    Energy Technology Data Exchange (ETDEWEB)

    Gleiser, R J; Reula, D A; Moreschi, O M [Universidad Nacional de Cordoba (Argentina). Inst. de Matematica, Astronomia y Fisica

    1980-09-01

    The properties of a simple quantum mechanical model for the decay of two equal mass particles are studied and related to some recent work on complex energy eigenvalues. It consists essentially of a generalization of the Lee-Friedrichs model for an unstable particle and gives a highly idealized version of the K⁰-anti-K⁰ system, including CP violations. The model is completely solvable, thus allowing a comparison with the well known Weisskopf-Wigner formalism for the decay amplitudes. A different model, describing the same system, is also briefly outlined.

  17. Viewpoint. The decision to build the ISR and its consequences: the role of Victor Weisskopf

    International Nuclear Information System (INIS)

    Hine, Mervyn

    1995-01-01

    December 1995 marks the thirtieth anniversary of the decision by the CERN Council to approve the proposal by the then Director General Victor Weisskopf to build the Intersecting Storage Rings (ISR). It was probably the Council's most important decision since setting up the Laboratory, judging by its consequences for physics and for CERN over the following three decades. December is also the twenty-fifth anniversary of the startup of the machine which led to a period of developments in accelerator technology with far reaching consequences. They included the production of stable high current stacked beams with high interaction rates in the presence of non-linear resonances and space charge, the inventions of stochastic cooling and of beam instrumentation using Schottky scans, and the technology of large ultra-high vacuum chambers with controlled wall impedance and resonances, to name only a few. Two results from the ISR physics programme of similar importance for the future were the first observation of large particle and photon yields with high transverse momentum (indicative of quark-gluon processes deep inside the colliding protons), and the consequent development of concepts for detectors with full solid angle calorimetry and tracking coverage, now used on all colliders. The technical success of the machine and the significance of the physics results led to a radical switch in physicists' thinking everywhere, away from fixed targets and towards colliders as the way to higher collision energies for hadrons. The ISR ceased to be seen as a toy for accelerator builders and became a window into a future far beyond the possibilities of fixed target accelerators. The ISR advances in accelerator and detector technology stimulated the conversion of the SPS to a proton-antiproton collider and the similar conversion of the Fermilab machine, and led to the proposals for the Superconducting Supercollider (SSC) in the USA and the LHC in Europe, to plans for a

  18. Comparison of methods for calculating decay lifetimes

    International Nuclear Information System (INIS)

    Tobocman, W.

    1978-01-01

    A simple scattering model is used to test alternative methods for calculating decay lifetimes, or equivalently, resonance widths. We consider the scattering of s-wave particles by a square well with a square barrier. Exact values for resonance energies and resonance widths are compared with values calculated from Wigner-Weisskopf perturbation theory and from the Garside-MacDonald projection operator formalism. The Garside-MacDonald formalism gives essentially exact results while the predictions of the Wigner-Weisskopf formalism are fairly poor

  19. Hydrodynamics of a quark droplet

    DEFF Research Database (Denmark)

    Bjerrum-Bohr, Johan J.; Mishustin, Igor N.; Døssing, Thomas

    2012-01-01

    We present a simple model of a multi-quark droplet evolution based on the hydrodynamical description. This model includes collective expansion of the droplet, effects of the vacuum pressure and surface tension. The hadron emission from the droplet is described following Weisskopf's statistical...

  20. Chinese physicists and the CERN group invited to visit China in September 1975

    CERN Multimedia

    CERN PhotoLab

    1975-01-01

    At the National People's Congress Palace in Peking, Wu Lein-Fu (centre) between V. Weisskopf and W. Jentschke. Second from left Mrs Weisskopf. At the back (centre) L. Van Hove, Mrs Van Hove, G. Charpak. See CERN Courier of October 1975.

  1. An analytical model of anisotropic low-field electron mobility in wurtzite indium nitride

    International Nuclear Information System (INIS)

    Wang, Shulong; Liu, Hongxia; Song, Xin; Guo, Yulong; Yang, Zhaonian

    2014-01-01

    This paper presents a theoretical analysis of anisotropic transport properties and develops an anisotropic low-field electron analytical mobility model for wurtzite indium nitride (InN). For the different effective masses in the Γ-A and Γ-M directions of the lowest valley, both the transient and steady state transport behaviors of wurtzite InN show different transport characteristics in the two directions. From the relationship between velocity and electric field, the difference is more obvious when the electric field is low in the two directions. To make an accurate description of the anisotropic transport properties under low field, for the first time, we present an analytical model of anisotropic low-field electron mobility in wurtzite InN. The effects of different ionized impurity scattering models on the low-field mobility calculated by Monte Carlo method (Conwell-Weisskopf and Brooks-Herring method) are also considered. (orig.)

  2. Statistical Model Analysis of (n, α) Cross Sections for 4.0-6.5 MeV Neutrons

    Directory of Open Access Journals (Sweden)

    Khuukhenkhuu G.

    2016-01-01

    The statistical model based on the Weisskopf-Ewing theory and the constant nuclear temperature approximation is used for a systematical analysis of the 4.0-6.5 MeV neutron induced (n, α) reaction cross sections. The α-clusterization effect was considered in the (n, α) cross sections. A certain dependence of the (n, α) cross sections on the relative neutron excess parameter of the target nuclei was observed. The systematic regularity of the (n, α) cross section behaviour is useful to estimate the same reaction cross sections for unstable isotopes. The results of our analysis can be used for nuclear astrophysical calculations such as helium burning and possible branching in the s-process.

  3. MODESTY, Statistical Reaction Cross-Sections and Particle Spectra in Decay Chain

    International Nuclear Information System (INIS)

    Mattes, W.

    1977-01-01

    1 - Nature of the physical problem solved: Code MODESTY calculates all energetically possible reaction cross sections and particle spectra within a nuclear decay chain. 2 - Method of solution: It is based on the statistical nuclear model following the method of Uhl (reference 1) where the optical model is used in the calculation of partial widths and the Blatt-Weisskopf single particle model for gamma rays

  4. Constituent quarks as clusters in quark-gluon-parton model. [Total cross sections, probability distributions]

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1976-12-01

    We present a quark-gluon-parton model in which quark-partons and gluons make clusters corresponding to two or three constituent quarks (or anti-quarks) in the meson or in the baryon, respectively. We explicitly construct the constituent quark state (cluster) by employing the Kuti-Weisskopf theory and by requiring scaling. The quark additivity of the hadronic total cross sections and the quark counting rules on the threshold powers of various distributions are satisfied. For small x (Feynman fraction), it is shown that the constituent quarks and quark-partons have quite different probability distributions. We apply our model to hadron-hadron inclusive reactions, and clarify that the fragmentation and the diffractive processes relate to the constituent quark distributions, while the processes in or near the central region are controlled by the quark-partons. Our model gives a reasonable interpretation of the experimental data and much improves the usual 'constituent interchange model' result near and in the central region (x ≈ x_T ≈ 0).

  5. Electromagnetic decay widths for L=1, J^PC = 1^-- T-baryonia: II

    International Nuclear Information System (INIS)

    Ellis, R.G.; McKellar, B.H.J.; Joshi, G.C.

    1981-01-01

    The electromagnetic decay widths of the J^PC = 1^--, L=1 T-baryonia in the 1-5 GeV region are estimated. The van Royen-Weisskopf technique is extended to baryonia within the framework of the QCD potential model. The diquark and antidiquark are assumed to have finite extent. Potential-dependent coefficients are scaled from known baryons and mesons.

  6. Does proton decay follow the exponential law

    International Nuclear Information System (INIS)

    Sanchez-Gomez, J.L.; Alvarez-Estrada, R.F.; Fernandez, L.A.

    1984-01-01

    In this paper, we discuss the exponential law for proton decay. By using a simple model based upon SU(5) GUT and the current theories of hadron structure, we explicitly show that the corrections to the Wigner-Weisskopf approximation are quite negligible for present-day protons, so that their eventual decay should follow the exponential law. Previous works are critically analyzed. (orig.)

  7. Spallation Neutron Emission Spectra in Some Amphoter Target Nuclei by Proton Beam Up to 140 MeV Energy

    International Nuclear Information System (INIS)

    Yildirim, G.

    2008-01-01

    In the present study, the (p,xn) reaction neutron-emission spectra for some amphoteric target nuclei such as 27Al, 64Zn, 120Sn and 208Pb were investigated up to 140 MeV incident proton energy. The pre-equilibrium components were calculated by using the hybrid model, the geometry dependent hybrid model, the full exciton model and the cascade exciton model. The reaction equilibrium component was calculated with the traditional compound nucleus model developed by Weisskopf and Ewing. Calculation results have been discussed and compared with the available experimental data in the literature.

  8. Pre-Equilibrium Cluster Emission with Pickup and Knockout

    International Nuclear Information System (INIS)

    Betak, E.

    2005-01-01

    We present a generalization of the Iwamoto-Harada-Bisplinghoff pre-equilibrium model of light cluster formation and emission, which is enhanced by allowing for possible admixtures of knockout for strongly coupled ejectiles, like α's. The model is able to attain the Weisskopf-Ewing formula for compound-nucleus decay at long-time limit; it keeps the philosophy of pre-equilibrium decay during the equilibration stage and it describes the initial phase of a reaction as direct process(es) expressed using the language of the exciton model

  9. Prediction of exotic deformations in the generalized differential equation model for B(E2)↑ and E2

    International Nuclear Information System (INIS)

    Nayak, R.C.; Pattnaik, S.

    2015-01-01

    The two physical quantities, namely the reduced electric quadrupole transition probability B(E2)↑ for the transitions from the ground state to the first 2⁺ state and the corresponding excitation energy E2 of even-even nuclei, play a very decisive role in identifying occurrences of increased collectivity. The resulting quadrupole deformation parameter β₂ and the ratio of β₂ to the Weisskopf single-particle β₂(sp) derived from them significantly help in this regard. Hence the study of these two physical quantities B(E2)↑ and E2 has been under constant investigation both by experimentalists and theorists. In this regard our recently developed differential equation model for B(E2)↑ and E2 can be exploited to explore the possible existence of exotic deformations in the exotic regions of the nuclear chart.
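
    For context, the deformation parameter mentioned above is conventionally obtained from the measured B(E2)↑ through β₂ = (4π/3ZR₀²)[B(E2)↑/e²]^(1/2) with R₀ = 1.2A^(1/3) fm, and the Weisskopf single-particle reference is B_W(E2) = 5.94×10⁻² A^(4/3) e²fm⁴. The sketch below evaluates these standard relations for an illustrative rare-earth case; the input B(E2)↑ is a placeholder of typical magnitude, not a value from the paper, and conventions for the single-particle reference (upward versus downward B(E2)) vary.

        import math

        def beta2_from_BE2(A, Z, BE2_e2fm4):
            """beta_2 = (4*pi / (3*Z*R0^2)) * sqrt(B(E2)up / e^2), R0 = 1.2*A^(1/3) fm."""
            R0sq = (1.2 * A ** (1.0 / 3.0)) ** 2          # fm^2
            return 4.0 * math.pi / (3.0 * Z * R0sq) * math.sqrt(BE2_e2fm4)

        def BW_E2(A):
            """Weisskopf single-particle estimate B_W(E2) = 5.94e-2 * A^(4/3) e^2 fm^4."""
            return 5.94e-2 * A ** (4.0 / 3.0)

        # illustrative deformed nucleus: A = 154, Z = 62, B(E2)up = 4.4 e^2 b^2 (placeholder)
        A, Z, BE2_up = 154, 62, 4.4e4                     # B(E2)up in e^2 fm^4
        print("beta_2          ~ %.2f" % beta2_from_BE2(A, Z, BE2_up))
        print("beta_2(sp)      ~ %.3f" % beta2_from_BE2(A, Z, BW_E2(A)))
        print("B(E2; 2+ -> 0+) ~ %.0f W.u." % (BE2_up / 5.0 / BW_E2(A)))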

  10. Anomalous nuclear enhancement of inclusive spectra at large transverse momentum

    International Nuclear Information System (INIS)

    Krzywicki, Andre.

    1976-01-01

    A parton model interpretation of the anomalous nuclear enhancement of inclusive spectra, observed by Cronin et al., is proposed. It seems that the picture representing a nucleus as a collection of quasi-free nucleons in slow relative motion is incorrect when the nucleus is probed during a very short time. This conjecture rests on an extension to nuclei of the Kuti and Weisskopf parton model. A list of observable predictions concerning both hadronic and leptonic interactions with nuclei is given.

  11. Isoscaling parameter in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Mallik, S.; Chaudhuri, G.

    2012-01-01

    The multifragmentation stage is studied with the Canonical Thermodynamical Model, which is based on equilibrium statistical mechanics and involves the calculation of partition functions. The decay of the excited fragments produced after the multifragmentation stage is calculated with an evaporation model based on Weisskopf's formalism. To study the temperature and symmetry energy dependence of α, the dissociating systems are taken as A_1 = 168, Z_1 = 75 and A_2 = 186, Z_2 = 75. These represent 112Sn + 112Sn and 124Sn + 124Sn central collisions after pre-equilibrium particles are emitted.
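
    For reference, the isoscaling parameter α studied above is conventionally extracted from the ratio of isotope yields measured in the two systems, R₂₁(N, Z) = Y₂(N, Z)/Y₁(N, Z) ∝ exp(αN + βZ), by fitting ln R₂₁ versus N at fixed Z. The sketch below only illustrates that fitting step on synthetic yields; it is not the canonical-model calculation of the paper, and the numerical values are arbitrary.

        import numpy as np

        def fit_isoscaling_alpha(N, lnR21):
            """Least-squares fit of ln R21(N) = alpha*N + const at fixed Z."""
            A = np.vstack([N, np.ones_like(N)]).T
            alpha, _const = np.linalg.lstsq(A, lnR21, rcond=None)[0]
            return alpha

        # synthetic yield ratios obeying R21 ~ exp(alpha*N + beta*Z), with noise
        rng = np.random.default_rng(0)
        N = np.arange(10, 18, dtype=float)
        alpha_true = 0.45
        lnR21 = alpha_true * N + 0.3 + rng.normal(0.0, 0.02, N.size)
        print("fitted alpha ~ %.3f (input %.2f)" % (fit_isoscaling_alpha(N, lnR21), alpha_true))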

  12. Various decays of some hadronic systems in constituent quark models

    International Nuclear Information System (INIS)

    Bonnaz, R.

    2001-09-01

    The topic of this study is the decay of mesons in constituent quark models. Those models as well as the various quark-antiquark interaction potentials are presented. Strong decay of a meson into two or three mesons is studied in the second part. The original ³P₀ model is presented, as well as the search for a vertex function γ(p) depending on the momentum of the created qq̄ pair. We show that a function γ(p) of constant-plus-Gaussian type is superior to the constant usually used. The second part is dedicated to electromagnetic transitions, studied through the emission of a real or a virtual photon. In the case of real photon emission, the different approximations found in the literature are reviewed and compared to the formalism going beyond the long-wavelength approximation. Mixing angles are tested for some mesons. In the case of a virtual photon, the expression for the decay width obtained by van Royen and Weisskopf is re-derived and then improved by taking into account the quark momentum distribution inside the meson. An electromagnetic dressing of quarks is introduced that improves the results. All along this study, wave functions of various degrees of sophistication are used. The results for decay widths are compared to a large body of experimental data. (author)

  13. Current quarks and constituent-classification quarks: some questions and ideas

    International Nuclear Information System (INIS)

    Close, F.E.

    1977-01-01

    A brief introduction is given to the spin dependence of inelastic photo and electroproduction. Parton model predictions of Kuti and Weisskopf are then criticized and a paradox noted in connection with a sum rule of Bjorken. The resolution of this paradox raises several questions concerning the constituent and current quark approaches to resonance excitation. Particular attention is given to current algebra constraints, angular momentum in the nucleon, the x → 1 behavior of inelastic electroproduction, psi production and radiative decays

  14. Investigation of the neutron emission spectra of some deformed nuclei for (n, xn) reactions up to 26 MeV energy

    International Nuclear Information System (INIS)

    Kaplan, A.; Bueyuekuslu, H.; Tel, E.; Aydin, A.; Boeluekdemir, M.H.

    2011-01-01

    In this study, neutron-emission spectra produced by (n, xn) reactions up to 26 MeV for some deformed target nuclei such as 165Ho, 181Ta, 184W, 232Th and 238U have been investigated. Also, the effect of the mean free path parameter on the (n, xn) neutron-emission spectra has been examined. In the calculations, pre-equilibrium neutron-emission spectra have been calculated by using the newly evaluated hybrid model and geometry dependent hybrid model, the full exciton model and the cascade exciton model. The reaction equilibrium component has been calculated by the Weisskopf-Ewing model. The obtained results have been discussed and compared with the available experimental data and found to agree with each other. (author)

  15. Emission of light particles associated with a high transverse momentum proton in the reaction 16O + 27Al at 94 MeV/u (E75)

    Energy Technology Data Exchange (ETDEWEB)

    Badala, A.; Barbera, R.; Palmeri, A.; Pappalardo, G.S.; Schillaci, A. (Istituto Nazionale di Fisica Nucleare, Catania (IT)); Bizard, G.; Bougault, R.; Durand, D.; Genoux-Lubain, A.; Lefebvres, F.; Patry, J.P. (Institut des Sciences de la Matiere du Rayonnement, 14 - Caen (FR)); Jin, G.; Laville, J.L.; Rosato, E. (Grand Accelerateur National d' Ions Lourds (GANIL), 14 - Caen (FR))

    1989-06-01

    The emission of light particles associated with a high transverse momentum proton in the reaction 16O + 27Al at 94 MeV/u has been studied with the help of the GANIL multidetectors (MUR and TONNEAU). Data are confronted with a model based on the standard high-energy participant-spectator picture coupled with the Weisskopf theory of evaporation. Reasonable agreement is achieved, indicating that the mean-field effects for this light system at such a rather high incident energy are negligible.

  16. Review of the classification of some known isomeric transitions

    Energy Technology Data Exchange (ETDEWEB)

    Ballini, R; Levi, C; Papineau, L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1953-07-01

    The relations of Berthelot (liquid-drop model) and of Weisskopf (odd nucleus with an unpaired proton), between the energy of a transition and its 'partial period' for the emission of a γ photon, are represented by easily usable diagrams. These relations are compared with the empirical relations for a certain number of groups of transitions. Concerning the M4 transitions, a correction is given to the method of treating the experimental results that had been proposed by Goldhaber and Sunyar (1951). An attempt at a systematics has been made concerning the energy of isomeric transitions (grouping around certain energies, variation of the nature and of the energy of the transitions with Z, N and A). (author)

  17. Reaction cross section calculation of some alkaline earth elements

    Science.gov (United States)

    Tel, Eyyup; Kavun, Yusuf; Sarpün, Ismail Hakki

    2017-09-01

    Reaction cross section knowledge is crucial for applications of nuclear physics such as medical imaging, radiation shielding and material evaluations. Nuclear reaction codes can be used if the experimental data are unavailable or are unlikely to be produced because of experimental difficulties. In this study, the reaction cross sections of some target alkaline earth elements have been calculated by using pre-equilibrium and equilibrium nuclear reaction models for nucleon induced reactions. In these calculations, the Hybrid Model, the Geometry Dependent Hybrid Model, the Full Exciton Model and the Cascade Exciton Model for pre-equilibrium reactions and the Weisskopf-Ewing Model for equilibrium reactions have been used. The calculated cross sections have been discussed and compared with the experimental data taken from the Experimental Nuclear Reaction Data library.

  18. Secondary decay of spallation in ADS reactors

    International Nuclear Information System (INIS)

    Rodrigues, Marcos Guedes; Santiago, A.J.; Silva, C.E. da

    2013-01-01

    We study the problem of evaporation in the context of nuclear spallation reactions in ADS nuclear reactors. The calculation was developed based on the Weisskopf evaporation theory and on the thermal liquid-drop model. Evaporation affects the neutron 'economy' and the design of an ADS reactor in various respects. It provides an abundant amount of neutrons in the nuclear medium, over a wide energy range. For an excitation energy of 3 MeV/n a typical core evaporates about 10% of its mass in the form of light particles (mostly neutrons).

  19. New calculations of cyclotron production cross sections of some positron emitting radioisotopes in proton induced reactions

    International Nuclear Information System (INIS)

    Tel, E.; Aydin, E.G.; Kaplan, A.; Aydin, A.

    2009-01-01

    In this study, new calculations of the excitation functions of the 13C(p,n)13N, 14N(p,α)11C, 15N(p,n)15O, 16O(p,α)13N, 18O(p,n)18F, 62Ni(p,n)62Cu, 68Zn(p,n)68Ga and 72Ge(p,n)72As reactions have been carried out in the 5-40 MeV incident proton energy range. In these calculations, the pre-equilibrium and equilibrium effects have been investigated. The pre-equilibrium calculations involve the hybrid model, the geometry dependent hybrid model, the cascade exciton model and the full exciton model. Equilibrium effects were calculated according to the Weisskopf-Ewing model. The calculated results have been compared with experimental data taken from the literature. (author)

  20. CERN loses two former Directors-General

    CERN Multimedia

    2002-01-01

    Victor Weisskopf, a giant of modern physics and Director General of CERN from 1961-65, died on 21 April. The previous month, Willibald Jentschke, Director General from 1971-75 and founder of the DESY Laboratory in Hamburg, passed away.

  1. Reaction cross section calculation of some alkaline earth elements

    Directory of Open Access Journals (Sweden)

    Tel Eyyup

    2017-01-01

    Reaction cross section knowledge is crucial for applications of nuclear physics such as medical imaging, radiation shielding and material evaluations. Nuclear reaction codes can be used if the experimental data are unavailable or are unlikely to be produced because of experimental difficulties. In this study, the reaction cross sections of some target alkaline earth elements have been calculated by using pre-equilibrium and equilibrium nuclear reaction models for nucleon induced reactions. In these calculations, the Hybrid Model, the Geometry Dependent Hybrid Model, the Full Exciton Model and the Cascade Exciton Model for pre-equilibrium reactions and the Weisskopf-Ewing Model for equilibrium reactions have been used. The calculated cross sections have been discussed and compared with the experimental data taken from the Experimental Nuclear Reaction Data library.

  2. At the European Physical Society (EPS) 1979 International Conference on High Energy Physics

    CERN Multimedia

    CERN PhotoLab

    1979-01-01

    To mark CERN's 25th Anniversary, this year's conference was held in Geneva from 27 June to 4 July, at the International Conference Centre. Here is Abdus Salam addressing theorists (in the front row from left: Viki Weisskopf, Leon Van Hove, Giuliano Preparata).

  3. CERN's 25th Anniversary

    CERN Multimedia

    CERN PhotoLab

    1979-01-01

    A ceremony in the Main Auditorium on 23 June 1979 to mark the 25th Anniversary of CERN. A photo of the five Directors-General of CERN, from left to right: John Adams, Willi Jentschke, Felix Bloch, Viki Weisskopf and Leon van Hove.

  4. Nuclear physicist, arms control advocate

    CERN Multimedia

    Chang, K

    2002-01-01

    Victor F. Weisskopf, a nuclear physicist who worked on the Manhattan Project to build the first atomic bomb in World War II and later became an ardent advocate of arms control, died Monday at his home in Newton, MA, USA. He was 93 (1 page).

  5. Strengths of gamma-ray transitions in A = 6–44 nuclei (III)

    NARCIS (Netherlands)

    Endt, P.M.

    The present tables list the strengths (in Weisskopf units) of over 2400 γ-ray transitions in A = 6–44 nuclei, classified according to character (electric or magnetic, multipolarity, isospin forbiddenness). Selected transitions from unbound states are included. The strengths for isovector E1 and M1

  6. George Hampton (1920-2004)

    CERN Multimedia

    2004-01-01

    George Hampton, who died recently, was CERN's Director of Administration in the 1960s and an important member of the team who managed the growth of CERN as it left the construction period and became a world-class physics laboratory. George came to CERN in 1963, when the laboratory was just starting its main research activities after the intense period of construction for the Proton Synchrotron (PS). At that time the laboratory was passing through a major budget crisis, and the new Director-General, Viki Weisskopf, was faced with a completely new structure set up by his predecessor John Adams, with 12 divisions for running the laboratory reporting directly to him, and with four Directors. With the renewal of the position of Director of Administration in 1963, Weisskopf selected George Hampton from candidates from the Member States. He came from the UK Atomic Energy Authority, but had worked earlier as a delegate to the International Civil Aviation Organization. George's position at the start was to hel...

  7. Physics in the twentieth century. A selection of papers. La physique du XXe siecle. Morceaux choisis

    Energy Technology Data Exchange (ETDEWEB)

    Weisskopf, V F

    1974-01-01

    A number of papers by Victor F. Weisskopf have been collected in this book. The papers included in the first part deal with basic concepts in quantum mechanics: particle-wave duality, the quantum scale, and the work of Niels Bohr. Papers in the second part describe developments in physics during the 20th century: the electron theory, the compound nucleus, nuclear structure, and the quantum theory of elementary particles. The third part is concerned with particular topics: nuclear models, the Lorentz relativistic contraction, light-matter interaction, parity decay, and symmetry. The fourth part gathers papers on science in general, which present a sort of natural philosophy.

  8. Comparison of the Weisskopf estimates in spin and K-isomers

    International Nuclear Information System (INIS)

    Garg, Swati; Maheshwari, B.; Rajput, Rohit; Srivastava, P.C.; Jain, A.K.

    2014-01-01

    Nuclear isomers are excited metastable states, which exist due to the hindrance of their decay. The study of isomers has recently become very popular due to advances in experimental techniques and also the arrival of radioactive beams. A large amount of new experimental data is becoming available. The very first 'Atlas of nuclear isomers' lists more than 2460 nuclear isomers with a half-life cutoff at 10 ns. Spin isomers mostly exist due to the difficulty in meeting the spin selection rules and cluster around the semi-magic regions. The isomers far from the magic numbers, which lie in the well-deformed region, mostly exist due to the goodness of the K-quantum number and the large K-difference between the decaying states. They are known as K-isomers.

  9. Dynamical and many-body correlation effects in the kinetic energy spectra of isotopes produced in nuclear multifragmentation

    Science.gov (United States)

    Souza, S. R.; Donangelo, R.; Lynch, W. G.; Tsang, M. B.

    2018-03-01

    The properties of the kinetic energy spectra of light isotopes produced in the breakup of a nuclear source and during the de-excitation of its products are examined. The initial stage, at which the hot fragments are created, is modeled by the statistical multifragmentation model, whereas the Weisskopf-Ewing evaporation treatment is adopted to describe the subsequent fragment de-excitation, as they follow their classical trajectories dictated by the Coulomb repulsion among them. The energy spectra obtained are compared to available experimental data. The influence of the fusion cross section entering into the evaporation treatment is investigated and its influence on the qualitative aspects of the energy spectra turns out to be small. Although these aspects can be fairly well described by the model, the underlying physics associated with the quantitative discrepancies remains to be understood.

  10. Fission modelling with FIFRELIN

    International Nuclear Information System (INIS)

    Litaize, Olivier; Serot, Olivier; Berge, Leonie

    2015-01-01

    The nuclear fission process gives rise to the formation of fission fragments and the emission of particles (n, γ, e⁻). The particle emission from fragments can be prompt and delayed. We present here the methods used in the FIFRELIN code, which simulates the prompt component of the de-excitation process. The methods are based on phenomenological models associated with macroscopic and/or microscopic ingredients. Input data can be provided by experiment as well as by theory. The fission fragment de-excitation can be performed within a Weisskopf (uncoupled neutron and gamma emission) or a Hauser-Feshbach (coupled neutron/gamma emission) statistical theory. We usually consider five free parameters that cannot be provided by theory or experiments in order to describe the initial distributions required by the code. In a first step this set of parameters is chosen to reproduce a very limited set of target observables. In a second step we can increase the statistics to predict all other fission observables such as prompt neutron, gamma and conversion electron spectra but also their distributions as a function of any kind of parameters such as, for instance, the neutron, gamma and electron number distributions, the average prompt neutron multiplicity as a function of fission fragment mass, charge or kinetic energy, and so on. Several results related to different fissioning systems are presented in this work. The goals in the next decade will be i) to replace some macroscopic ingredients or phenomenological models by microscopic calculations when available and reliable, ii) to be a support for experimentalists in the design of detection systems or in the prediction of necessary beam time or count rates with associated statistics when measuring fragments and emitted particles in coincidence, iii) to extend the model to be able to run a calculation when no experimental input data are available, iv) to account for multiple chance fission and gamma emission before fission, v) to account for the

  11. Alan Guth and Andrei Linde win international cosmology award

    CERN Multimedia

    2004-01-01

    "Leading theoretical cosmologists Alan Guth, Weisskopf Professor of Physics at the Massachusetts Institute of Technology, and Andrei Linde, Professor of Physics at Stanford University, who played prominent roles in developing and refining the theory of cosmic inflation, have been selected by an international panel of experts to receive the 2004 Cosmology Prize of the Peter Gruber Foundation" (1 page).

  12. Reflections on the History of Science and Technology in Austria

    International Nuclear Information System (INIS)

    Broda, E.

    1972-01-01

    This text was written for a talk given by E. Broda at the symposium "The Future of Science and Technology", held within the framework of the Austrian National Day in Vienna in 1972, and it addresses, amongst others, Victor Weisskopf. The text contains reflections on the history of science and technology in Austria. (nowak)

  13. Open problems in formation and decay of composite systems in ...

    Indian Academy of Sciences (India)

    Entrance channel effects in the population of giant dipole resonance states ... sensitive to the evaporation from the last steps in the decay chain, where the .... In fact, following Blatt and Weisskopf [18] and Thomas [19], the rate R Üde for the .... A natural way to study temperature effects on the DGDR would be the use of ...

  14. Physics in the twentieth century. A selection of papers

    International Nuclear Information System (INIS)

    Weisskopf, V.F.

    1974-01-01

    A number of papers by Victor F. Weisskopf have been collected in this book. The papers included in the first part deal with basic concepts in quantum mechanics: particle-wave duality, the quantum scale, and the work of Niels Bohr. Papers in the second part describe developments in physics during the 20th century: the electron theory, the compound nucleus, nuclear structure, and the quantum theory of elementary particles. The third part is concerned with particular topics: nuclear models, the Lorentz relativistic contraction, light-matter interaction, parity decay, and symmetry. The fourth part gathers papers on science in general, which present a sort of natural philosophy.

  15. CERN physics, past and future

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    A summary is given of a talk presented by Victor Weisskopf to introduce the proceedings on CERN day at the European Physical Society's 1979 International Conference on High Energy Physics, held in Geneva. The significance of results from the experimental program at CERN is discussed. Some speculations on future discoveries and developments, as envisaged by Bjorn Wiik, are also presented. (W.D.L.).

  16. Calculation of photonuclear process in the region of several tens of MeV. Formulation of exact transition rate for high energy γ-ray

    International Nuclear Information System (INIS)

    Wada, Hiroaki; Harada, Hideo

    1999-01-01

    The electromagnetic field approximated in the long-wavelength limit is not valid for heavy nuclei or high-energy γ-ray transitions. To examine the contribution of the electric multipole field that is neglected in the long-wavelength limit, we formulate the E1 transition rate for the exact electric multipole field and compare this result quantitatively with the Weisskopf estimate. (author)
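
    The validity of the long-wavelength limit discussed above is controlled by the parameter kR = EγR/(ħc) with R ≈ 1.2A^(1/3) fm, which must be small compared with unity. The short sketch below, with illustrative numbers, shows that kR becomes of order one for heavy nuclei and γ-ray energies of several tens of MeV, which is why the exact multipole field matters there.

        HBARC = 197.327    # MeV fm

        def kR(E_gamma, A):
            """Expansion parameter of the long-wavelength limit: k*R with R = 1.2*A^(1/3) fm."""
            return E_gamma * 1.2 * A ** (1.0 / 3.0) / HBARC

        for E, A in [(1.0, 40), (15.0, 120), (40.0, 208)]:
            print("E_gamma = %5.1f MeV, A = %3d  ->  kR = %.2f" % (E, A, kR(E, A)))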

  17. Festschrift Charpak (Georges) on his 65th birthday

    CERN Document Server

    Lehmann, Pierre; Rubbia, Carlo; Saudinos, Jean; CERN. Geneva

    1989-01-01

    On the occasion of the 65th birthday of Georges Charpak and of his retirement, the Director-General and the EP Division invite you to a symposium in his honour. Chairman: P. Lehmann. - Opening address: C. Rubbia. - Message from V. Weisskopf. - L. Lederman: Superstrings needs sealing wax. - J. Saudinos: Some applications of gaseous detectors to medicine and biology.

  18. Systematics of Absolute Gamma Ray Transition Probabilities in Deformed Odd-A Nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Malmskog, S G

    1965-11-15

    All known experimentally determined absolute gamma ray transition probabilities between different intrinsic states of deformed odd-A nuclei in the rare-earth region (153 < A < 181) and in the actinide region (A ≥ 227) are compared with theoretical transition probabilities (Weisskopf and Nilsson estimates). Systematic deviations from the theoretical values are found. Possible explanations for these deviations are given. This discussion includes Coriolis coupling, ΔK = ±2 band-mixing effects and pairing interaction.
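
    For readers unfamiliar with the reference scale used in such comparisons, a transition strength in Weisskopf units is the measured reduced transition probability divided by the single-particle (Weisskopf) estimate. The sketch below implements the textbook estimates B_W(Eλ) and B_W(Mλ) and the corresponding conversion; the example transition is purely illustrative and not taken from the data discussed above.

        import math

        def BW_E(lam, A):
            """Weisskopf estimate B_W(E lambda) in e^2 fm^(2*lambda)."""
            return (1.2 ** (2 * lam) / (4.0 * math.pi)) * (3.0 / (lam + 3)) ** 2 * A ** (2.0 * lam / 3.0)

        def BW_M(lam, A):
            """Weisskopf estimate B_W(M lambda) in mu_N^2 fm^(2*lambda - 2)."""
            return (10.0 / math.pi) * 1.2 ** (2 * lam - 2) * (3.0 / (lam + 3)) ** 2 * A ** ((2.0 * lam - 2) / 3.0)

        def strength_in_Wu(B_exp, lam, A, kind="E"):
            """Transition strength in Weisskopf units: B_exp / B_W."""
            return B_exp / (BW_E(lam, A) if kind == "E" else BW_M(lam, A))

        # illustrative: an E2 transition with B(E2) = 50 e^2 fm^4 in an A = 24 nucleus
        print("B_W(E2, A=24) = %.1f e^2 fm^4" % BW_E(2, 24))
        print("strength      = %.1f W.u." % strength_in_Wu(50.0, 2, 24, "E"))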

  19. Happy Birthday Viki

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1988-12-15

    The CERN auditorium was packed on the 19th and 20th of September when friends and colleagues of Viktor Weisskopf gathered to celebrate the 80th birthday of one of the best known and most admired personalities in the particle physics community. A galaxy of scientists was lined up to speak at an international colloquium on 'Science, Culture and Peace', organized by CERN and the Ettore Majorana Centre for Scientific Culture.

  20. Penetration effect in internal conversion and nuclear structure

    International Nuclear Information System (INIS)

    Listengarten, M.A.

    1978-01-01

    The conditions for the appearance of anomalous internal conversion coefficients (ICC) are considered, namely when the contribution of the penetration matrix element (PME) is of the order of or larger than the main part of the conversion matrix element. The experimental magnitudes of the nuclear PME agree well with those calculated in the framework of simple nuclear models, provided the magnitude of the PME is not decreased due to model-dependent selection rules. The magnitude of the anomaly (the λ parameter) is compared with the exclusion factor of the γ-transition relative to the Weisskopf estimate. The better the model of the nucleus, the weaker the dependence of the λ magnitude on the exclusion factor. ICC might be anomalous for those γ-transitions for which the exclusion factor calculated in the framework of a more rigorous model is of the order of unity. In an 'ideal' model of the nucleus, completely adequate to the true nuclear structure, the dependence of the λ penetration parameter on the exclusion factor vanishes.

  1. Energy spectra of fast neutrons by nuclear emulsion method

    International Nuclear Information System (INIS)

    Quaresma, A.A.

    1977-01-01

    An experimental method which uses nuclear emulsion plates to determine the energy spectrum of fission neutrons is described. By using this technique, we have obtained the energy distribution of neutrons from the spontaneous fission of 252Cf. The results are in good agreement with those obtained previously by other authors who have used different detection techniques, and they are consistent with a Maxwellian distribution, as expected from Weisskopf's nuclear evaporation theory. (author)
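
    For comparison with the measured spectrum described above, the two standard analytic shapes can be written down directly: the Weisskopf evaporation form N(E) ∝ E exp(−E/T) and the Maxwellian N(E) ∝ √E exp(−E/T_M) usually fitted to 252Cf fission neutrons. The sketch below uses T_M ≈ 1.42 MeV, a commonly quoted 252Cf fit value, purely for illustration; it is not a result of the paper.

        import numpy as np

        def weisskopf_shape(E, T):
            """Normalized Weisskopf evaporation shape: N(E) = (E / T^2) * exp(-E/T)."""
            return E / T**2 * np.exp(-E / T)

        def maxwellian_shape(E, TM):
            """Normalized Maxwellian shape: N(E) = 2 sqrt(E) exp(-E/TM) / (sqrt(pi) TM^1.5)."""
            return 2.0 * np.sqrt(E) * np.exp(-E / TM) / (np.sqrt(np.pi) * TM**1.5)

        E = np.linspace(0.05, 12.0, 400)      # MeV
        T, TM = 1.0, 1.42                     # MeV (illustrative / commonly quoted 252Cf value)
        print("evaporation shape: mean = %.2f MeV (analytic 2T    = %.2f)" %
              (np.trapz(E * weisskopf_shape(E, T), E), 2.0 * T))
        print("Maxwellian shape:  mean = %.2f MeV (analytic 3TM/2 = %.2f)" %
              (np.trapz(E * maxwellian_shape(E, TM), E), 1.5 * TM))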

  2. Happy Birthday Viki

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    The CERN auditorium was packed on the 19th and 20th of September when friends and colleagues of Viktor Weisskopf gathered to celebrate the 80th birthday of one of the best known and most admired personalities in the particle physics community. A galaxy of scientists was lined up to speak at an international colloquium on 'Science, Culture and Peace', organized by CERN and the Ettore Majorana Centre for Scientific Culture.

  3. Energy spectrum of 208Pb(n,x) reactions

    Science.gov (United States)

    Tel, E.; Kavun, Y.; Özdoǧan, H.; Kaplan, A.

    2018-02-01

    Fission and fusion reactor technologies have been investigated since the 1950s around the world. For reactor technology, investigations of fission and fusion reactions play an important role in improving new-generation technologies. In particular, neutron reaction studies have an important place in the development of nuclear materials, so neutron effects on materials should be studied both theoretically and experimentally to improve reactor design. For this reason, nuclear reaction codes are very useful tools when experimental data are unavailable; for such circumstances scientists have created many nuclear reaction codes such as ALICE/ASH, CEM95, PCROSS, TALYS, GEANT and FLUKA. In this study we used the ALICE/ASH, PCROSS and CEM95 codes to calculate the energy spectra of particles emitted in neutron bombardment of Pb. While the Weisskopf-Ewing model has been used for the equilibrium process in the calculations, the full exciton, hybrid and geometry-dependent hybrid nuclear reaction models have been used for the pre-equilibrium process. The calculated results have been discussed and compared with the experimental data taken from EXFOR.

  4. From ISOLDE to ISOLDE 2

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    On December 17, 1964 the CERN DG, Victor Weisskopf, sent a letter to the ISOLDE Collaboration giving the green light to perform experiments at CERN. An underground hall was built close to the SC, which accelerated protons to 600 MeV energy. The first experiment at ISOLDE was performed on September 17, 1967. In my talk I shall cover the first years of experiments at ISOLDE at CERN, the SC Improvement Programme and the building of ISOLDE 2.

  5. The 1979 Bernard Gregory lectures

    International Nuclear Information System (INIS)

    Weisskopf, V.F.

    1980-02-01

    This volume contains the texts of the lectures given by Professor V.F. Weisskopf at CERN and in Paris in the autumn of 1979, as the first Gregory lecturer. The titles of the three different texts are 'Growing up with Field Theory', 'Recent Trends in Particle Physics' and 'L'Art et la Science'. While the latter lecture was given in French, an English text here follows the French one. The volume starts with a short biographical note about Bernard Gregory. (orig.)

  6. Excited states in stochastic electrodynamics

    International Nuclear Information System (INIS)

    Franca, H.M.; Marshall, T.W.

    1987-12-01

    It is shown that the set of Wigner functions associated with the excited states of the harmonic oscillator constitutes a complete set of functions over phase space. An arbitrary distribution can be expanded in terms of these Wigner functions. By studying the time evolution, according to Stochastic Electrodynamics, of the expansion coefficients, it becomes feasible to separate explicitly the contributions of the radiative reaction and the vacuum field to the Einstein A coefficients for this system. A simple semiclassical explanation of the Weisskopf-Heitler phenomenon in resonance fluorescence is also supplied. (author)

  7. Low-rank driving in quantum systems

    International Nuclear Information System (INIS)

    Burkey, R.S.

    1989-01-01

    A new property of quantum systems called low-rank driving is introduced. Numerous simplifications in the solution of the time-dependent Schroedinger equation are pointed out for systems having this property. These simplifications are in the areas of finding eigenvalues, taking the Laplace transform, converting Schroedinger's equation to an integral form, discretizing the continuum, generalizing the Weisskopf-Wigner approximation, band-diagonalizing the Hamiltonian, finding new exact solutions to Schroedinger's equation, and so forth. The principal physical application considered is the phenomenon of coherent population trapping in continuum-continuum interactions

  8. The significance of Cern

    CERN Multimedia

    Weisskopf,V

    Prof. V. Weisskopf, CERN Director-General from 1961 to 1965, was born in Vienna, studied at Göttingen and has had a particularly rich academic career. He worked in Berlin and Copenhagen, then left for the United States to take part in the Manhattan project, and was a professor at MIT until 1960. Back in Europe, he became Director-General of CERN and gave the Organization the impetus we know.

  9. Relativistic duality, and relativistic and radiative corrections for heavy-quark systems

    International Nuclear Information System (INIS)

    Durand, B.; Durand, L.

    1982-01-01

    We give a JWKB proof of a relativistic duality relation which relates an appropriate energy average of the physical cross section for e+e− → qq-bar bound states → hadrons to the same energy average of the perturbative cross section for e+e− → qq-bar. We show that the duality relation can be used effectively to estimate relativistic and radiative corrections for bound-quark systems to order αs². We also present a formula which relates the square of the 'large' ³S₁ Salpeter-Bethe-Schwinger wave function for zero space-time separation of the quarks to the square of the nonrelativistic Schroedinger wave function at the origin for an effective potential which reproduces the relativistic spectrum. This formula allows one to use the nonrelativistic wave functions obtained in potential models fitted to the psi and UPSILON spectra to calculate relativistic leptonic widths for qq-bar states via a relativistic version of the van Royen-Weisskopf formula

  10. Observation of a new high-spin isomer in 94Pd

    International Nuclear Information System (INIS)

    Brock, T. S.; Nara Singh, B. S.; Wadsworth, R.; Boutachkov, P.; Gorska, M.; Grawe, H.; Pietri, S.; Domingo-Pardo, C.; Caceres, L.; Engert, T.; Farinon, F.; Gerl, J.; Goel, N.; Kojuharov, I.; Kurz, N.; Nociforo, C.; Prochazka, A.; Schaffner, H.; Weick, H.; Braun, N.

    2010-01-01

    A second γ-decaying high-spin isomeric state, with a half-life of 197(22) ns, has been identified in the N=Z+2 nuclide 94Pd as part of a stopped-beam Rare Isotope Spectroscopic INvestigation at GSI (RISING) experiment. Weisskopf estimates were used to establish a tentative spin/parity of 19−, corresponding to the maximum possible spin of a negative-parity state in the restricted (p1/2, g9/2) model space of empirical shell-model calculations. The reproduction of the E3 decay properties of the isomer required an extension of the model space to include the f5/2 and p3/2 orbitals using the CD-Bonn potential. This is the first time that such an extension has been required for a high-spin isomer in the vicinity of 100Sn and reveals the importance of such orbits for understanding the decay properties of high-spin isomers in this region. However, despite the need for the extended model space for the E3 decay, the dominant configuration for the 19− state remains (πp1/2⁻¹ g9/2⁻³)11 ⊗ (νg9/2⁻²)8. The half-life of the known 14+ isomer was remeasured and yielded a value of 499(13) ns.
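
    The Weisskopf estimates invoked above follow from the single-particle value B_W(Eλ) = (1.2)^{2λ}/(4π) · (3/(λ+3))² · A^{2λ/3} e²fm^{2λ} combined with the standard electric 2^λ-pole decay rate. A short Python sketch, using a purely hypothetical E3 γ-ray energy since the transition energy is not quoted here, could read:

        from math import pi, log

        HBAR = 6.58212e-22      # MeV s
        HBARC = 197.327         # MeV fm
        E2_CONST = 1.43996      # e^2 in MeV fm

        def double_factorial(n):
            return 1 if n <= 0 else n * double_factorial(n - 2)

        def bw_estimate(lam, A):
            """Weisskopf single-particle B(E lambda) in e^2 fm^(2 lambda)."""
            return 1.2 ** (2 * lam) / (4 * pi) * (3.0 / (lam + 3)) ** 2 * A ** (2 * lam / 3.0)

        def e_lambda_rate(lam, e_gamma, B):
            """Electric 2^lambda-pole decay rate (1/s); e_gamma in MeV, B in e^2 fm^(2 lambda)."""
            pref = 8 * pi * (lam + 1) / (lam * double_factorial(2 * lam + 1) ** 2)
            return pref / HBAR * (e_gamma / HBARC) ** (2 * lam + 1) * B * E2_CONST

        # Hypothetical 0.5 MeV E3 transition in A = 94 (illustration only):
        B_w = bw_estimate(3, 94)
        print("Weisskopf E3 half-life estimate: %.3g s" % (log(2) / e_lambda_rate(3, 0.5, B_w)))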

  11. Theoretical and experimental study on broadband terahertz atmospheric transmission characteristics

    International Nuclear Information System (INIS)

    Guo Shi-Bei; Zhong Kai; Wang Mao-Rong; Liu Chu; Xu De-Gang; Yao Jian-Quan; Xiao Yong; Wang Wen-Peng

    2017-01-01

    Broadband terahertz (THz) atmospheric transmission characteristics from 0 to 8 THz are theoretically simulated based on a standard Van Vleck–Weisskopf line shape, considering 1696 water absorption lines and 298 oxygen absorption lines. The influences of humidity, temperature, and pressure on the THz atmospheric absorption are analyzed and experimentally verified with a Fourier transform infrared spectrometer (FTIR) system, showing good consistency. The investigation and evaluation on high-frequency atmospheric windows are good supplements to existing data in the low-frequency range and lay the foundation for aircraft-based high-altitude applications of THz communication and radar. (paper)
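
    The Van Vleck-Weisskopf line shape used in such simulations is commonly written as F(ν) = (1/π)(ν/ν₀)²[Δν/((ν₀−ν)²+Δν²) + Δν/((ν₀+ν)²+Δν²)]. A minimal Python sketch that sums a few made-up lines (not the 1696-line water and 298-line oxygen compilations used above, and with the pressure and temperature dependence of the widths omitted) is:

        import numpy as np

        def van_vleck_weisskopf(nu, nu0, dnu):
            """Van Vleck-Weisskopf shape function; nu, nu0, dnu in the same frequency units."""
            return (nu / nu0) ** 2 / np.pi * (
                dnu / ((nu0 - nu) ** 2 + dnu ** 2) + dnu / ((nu0 + nu) ** 2 + dnu ** 2)
            )

        nu = np.linspace(1.0, 8000.0, 8000)                       # GHz (0-8 THz)
        # (line centre GHz, half-width GHz, strength) -- illustrative values only
        lines = [(556.9, 2.0, 1.0), (752.0, 2.2, 0.6), (1097.4, 2.5, 0.4)]
        absorption = sum(S * van_vleck_weisskopf(nu, nu0, dnu) for nu0, dnu, S in lines)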

  12. Wolfgang Pauli Room

    CERN Multimedia

    Bennett, Sophia Elizabeth

    2017-01-01

    This small but historically valuable collection was donated by Pauli's widow who, with the help of friends including his former assistants Charles Enz and Victor Weisskopf, gathered together Pauli's manuscripts and notes, and tracked down originals or copies of his many letters. His correspondence with Bohr, Heisenberg, Einstein and others, discussing many of the new ideas in physics, has been published and provides an invaluable resource for those interested in studying the development of 20th century science. Unlike the main CERN Archive, most items in the Pauli collection have been digitised and are available online.

  13. Evolution of wave function in a dissipative system

    Science.gov (United States)

    Yu, Li-Hua; Sun, Chang-Pu

    1994-01-01

    For a dissipative system with Ohmic friction, we obtain a simple and exact solution for the wave function of the system plus the bath. It is described by the direct product of states in two independent Hilbert spaces. One of them is described by an effective Hamiltonian, the other represents the effect of the bath, i.e., the Brownian motion, thus clarifying the structure of the wave function of the system whose energy is dissipated by its interaction with the bath. No path-integral techniques are needed in this treatment. The derivation of the Weisskopf-Wigner line-width theory follows easily.
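
    In its simplest form, the Weisskopf-Wigner result referred to here is that an excited-state amplitude decaying as c(t) = exp(−iω₀t − Γt/2) yields a Lorentzian emission line of full width at half maximum Γ. A minimal numerical check of that statement (arbitrary units, nothing specific to the Ohmic-bath model above) is:

        import numpy as np

        w0, Gamma = 50.0, 2.0                          # arbitrary units
        t = np.linspace(0.0, 40.0, 2 ** 14)
        c = np.exp(-1j * w0 * t - 0.5 * Gamma * t)     # decaying excited-state amplitude

        spectrum = np.abs(np.fft.fft(c)) ** 2
        omega = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])

        fwhm = np.ptp(omega[spectrum > spectrum.max() / 2])   # crude numerical FWHM
        print("numerical FWHM ~ %.2f, Weisskopf-Wigner width Gamma = %.2f" % (fwhm, Gamma))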

  14. Study of fragmentation reactions of light nucleus

    International Nuclear Information System (INIS)

    Toneli, David Arruda; Carlson, Brett Vern

    2011-01-01

    Full text: The decay of the compound nucleus is traditionally calculated using a sequential emission model, such as the Weisskopf-Ewing or Hauser-Feshbach ones, in which the compound nucleus decays through a series of residual nuclei by emitting one particle at a time until there is no longer sufficient energy for further emission. In a light compound nucleus, however, the excitation energy necessary to fully disintegrate the system is relatively easy to attain. In such cases, decay by simultaneous emission of two or more particles becomes important. A model which takes into account all these decays is the Fermi fragmentation model. Recently, the equivalence between the Fermi fragmentation model and the statistical multifragmentation model used to describe the decay of highly excited fragments in heavy-ion reactions was demonstrated. Due to the simplicity of the thermodynamic treatment used in the multifragmentation model, we have adapted it to the calculation of the Fermi breakup of light nuclei. The ultimate goal of this study is to calculate the distribution of isotopes produced in proton-induced reactions on light nuclei of biological interest, such as C, O and Ca. Although most of these residual nuclei possess extremely short half-lives and thus represent little long-term danger, they tend to be deficient in neutrons and to decay by positron emission, which allows the monitoring of proton radiotherapy by PET (Positron Emission Tomography). (author)

  15. Precision hfs of 126Cs(T1/2=1.63 m) by ABMR

    International Nuclear Information System (INIS)

    Pinard, J.; Duong, H.T.; Marescaux, D.; Stroke, H.H.; Redi, O.; Gustafsson, M.; Nilsson, T.; Matsuki, S.; Kishimoto, Y.; Kominato, K.; Ogawa, I.; Shibata, M.; Tada, M.; Persson, J.R.; Nojiri, Y.; Momota, S.; Inamura, T.T.; Wakasugi, M.; Juncar, P.; Murayama, T.; Nomura, T.; Koizumi, M.

    2005-01-01

    The hfs separation Δν of 126Cs (T1/2 = 1.63 m) in the 6s ²S1/2 ground state was obtained in a precision measurement near zero magnetic field by means of atomic beam magnetic resonance with laser optical pumping on-line with the CERN-PSB-ISOLDE mass separator. The result, Δν = 3629.514 (0.001) MHz, corrects significantly a previously published value from a high-field experiment. With our result, the precision of the nuclear magnetic moment, μ(126Cs) ≈ 0.776 μN, is now limited by the influence of extended nuclear structure on the hfs (the Bohr-Weisskopf effect)

  16. Physics for Mathematicians

    Science.gov (United States)

    Ulam, S. M.

    2014-11-01

    When I was asked to give a talk here, being just a mathematician among the distinguished array of physicists invited to speak, I had great hesitation. Then it occurred to me, if Viki Weisskopf can conduct a symphony orchestra, maybe I can talk about physics. I felt consoled until yesterday evening when I discovered that he is a professional, and so I feel very hesitant again. My title, "Physics for Mathematicians", will almost mean physics without mathematics. My interest is really to paraphrase a famous statement: not what mathematics can do for physics, but what physics can do for mathematics. That is the underlying motive...

  17. Future perspectives

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    International involvement in particle physics is what the International Committee for Future Accelerators (ICFA) is all about. At the latest Future Perspectives meeting at Brookhaven from 5-10 October (after a keynote speech by doyen Viktor Weisskopf, who regretted the emergence of 'a nationalistic trend'), ICFA reviewed progress and examined its commitments in the light of the evolving world particle physics scene. Particular aims were to review worldwide accelerator achievements and plans, to survey the work of the four panels, and to discuss ICFA's special role in future cooperation in accelerator construction and use, and in research and development work for both accelerators and for detectors

  18. International School of Subnuclear Physics 50th Course

    CERN Document Server

    What we would like LHC to give us; ISSP 2012

    2014-01-01

    This book is the proceedings of the International School of Subnuclear Physics, ISSP 2012, 50th Course — ERICE, 23 June – 2 July 2012. This course was devoted to the celebrations of the 50th Anniversary of the Subnuclear Physics School, which was started in 1961 by Antonino Zichichi with John Bell at CERN and formally established in 1962 by Bell, Blackett, Weisskopf, Rabi and Zichichi in Geneva (CERN). The lectures covered the latest and most significant achievements in theoretical and experimental subnuclear physics. Readership: directed to experts and advanced-level students in the field of theoretical and experimental subnuclear physics.

  19. Halo-induced large enhancement of soft dipole excitation of 11Li observed via proton inelastic scattering

    Directory of Open Access Journals (Sweden)

    J. Tanaka

    2017-11-01

    Proton inelastic scattering off a neutron halo nucleus, 11Li, has been studied in inverse kinematics at the IRIS facility at TRIUMF. The aim was to establish a soft dipole resonance and to obtain its dipole strength. Using a high-quality 66 MeV 11Li beam, a strongly populated excited state in 11Li was observed at Ex = 0.80±0.02 MeV with a width of Γ = 1.15±0.06 MeV. A DWBA (distorted-wave Born approximation) analysis of the measured differential cross section with isoscalar macroscopic form factors leads us to conclude that this observed state is excited by an electric dipole (E1) transition. Under the assumption of isoscalar E1 transitions, the strength is evaluated to be extremely large, amounting to 30∼296 Weisskopf units, exhausting 2.2%∼21% of the isoscalar E1 energy-weighted sum rule (EWSR) value. The large observed strength originates from the halo and is consistent with the simple di-neutron model of the 11Li halo.

  20. Study on (n,2n) and (n,p) reactions of strontium nucleus

    Energy Technology Data Exchange (ETDEWEB)

    Yiğit, Mustafa, E-mail: mustafayigit@aksaray.edu.tr [Aksaray University, Department of Physics, Faculty of Arts and Science, Aksaray (Turkey); Tel, Eyyup [Osmaniye Korkut Ata University, Department of Physics, Faculty of Arts and Science, Osmaniye (Turkey)

    2015-11-15

    Highlights: • The cross sections for (n,2n) and (n,p) nuclear reactions on 84,86,88,90Sr target nuclei have been investigated. • The codes ALICE/ASH, PCROSS and CEM03.01, as well as cross-section systematics, are used in the calculations. • Cross-section calculations are given for projectile energies from threshold up to 30 MeV. • The results obtained are compared with each other, with experimental data, and with the ENDF/B-VII.1 and TENDL-2014 libraries. - Abstract: The cross sections for (n,2n) and (n,p) nuclear reactions on 84,86,88,90Sr target nuclei from threshold up to 30 MeV have been investigated. The cross sections have been determined employing the codes ALICE/ASH, PCROSS and CEM03.01, which take into consideration compound and precompound emissions. Calculations have been performed using the Weisskopf-Ewing model (WEM) for the compound reaction mechanism; the Hybrid model (HM), Geometry Dependent Hybrid model (GDHM) and Full Exciton model (FEM) for the precompound reaction mechanism; the Cascade Exciton model (CEM) including cascade interactions; and empirical and semi-empirical systematics. In order to test the theoretical nuclear models, the data obtained from the excitation function calculations are discussed and compared with available experimental values and the ENDF/B-VII.1 and TENDL-2014 libraries. Finally, the new cross-section results calculated in the present paper may be useful for nuclear energy applications.

  1. Fragmentation of two-phonon {gamma}-vibrational strength in deformed nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Wu, C.Y.; Cline, D. [Univ. of Rochester, NY (United States)

    1996-12-31

    Rotational and vibrational modes of collective motion are very useful in classifying the low-lying excited states in deformed nuclei. The rotational mode of collective motion is characterized by rotational bands having correlated level energies and strongly-enhanced E2 matrix elements. The lowest intrinsic excitation with I,Kπ = 2,2+ in even-even deformed nuclei, typically occurring at ≈1 MeV, is classified as a one-phonon γ-vibration state. In the pure harmonic vibration limit, the expected two-phonon γ-vibration states with I,Kπ = 0,0+ and 4,4+ should have excitation energies at twice that of the I,Kπ = 2,2+ excitation, i.e. ≈2 MeV, which usually is above the pairing gap, leading to possible mixing with two-quasiparticle configurations. Therefore, the question of the localization of two-phonon γ-vibration strength has been raised, because mixing may lead to fragmentation of the two-phonon strength over a range of excitation energy. For several well-deformed nuclei, an assignment of I,Kπ = 4,4+ states as being two-phonon vibrational excitations has been suggested based on the excitation energies and the predominant γ-ray decay to the I,Kπ = 2,2+ state. However, absolute B(E2) values connecting the presumed two- and one-phonon states are the only unambiguous measure of double-phonon excitation. Such B(E2) data are available for 156Gd, 160Dy, 168Er, 232Th, and 186,188,190,192Os. Except for 160Dy, the measured B(E2) values range from 2-3 Weisskopf units in 156Gd to 10-20 Weisskopf units in the osmium nuclei; enhancement that is consistent with collective modes of motion.

  2. Modification of Einstein A Coefficient in Dissipative Gas Medium

    Science.gov (United States)

    Cao, Chang-Qi; Cao, Hui; Qin, Ke-Cheng

    1996-01-01

    Spontaneous radiation in a dissipative gas medium such as a plasma is investigated by Langevin equations and the modified Weisskopf-Wigner approximation. Since the refractive index of a gas medium is expected to be nearly unity, we first neglect the medium polarization effect. We show that absorption in plasmas may in certain cases modify the Einstein A coefficient significantly and cause a pit in the A coefficient-density curves for relatively low-temperature plasmas, and also a pit in the A coefficient-temperature curves. Next, the effect of medium polarization is taken into account in addition. To our surprise, its effect in certain cases is quite significant. The dispersive curves show different behaviors in different regions of parameters.

  3. Multiperturber effects in the Faraday spectrum of Rb atoms immersed in a high-density Xe gas

    Science.gov (United States)

    Woerdman, J. P.; Blok, F. J.; Kristensen, M.; Schrama, C. A.

    1996-02-01

    We have measured the D1 and D2 Faraday spectrum and absorption spectrum of Rb atoms immersed in high-density Xe buffer gas in the range nXe = 0.8–4.5×10²⁰ cm⁻³. We find that the shape of the Faraday spectrum obeys the Becquerel relation over this whole density range; however, the relative strength of the Faraday effect compared to absorption changes rather abruptly near nXe = 1×10²⁰ cm⁻³. This is ascribed to the onset of a many-body nature (overlapping collisions) of the Rb:Xe line broadening; the number of perturbers within the Weisskopf sphere is unity at nXe ~ 1×10²⁰ cm⁻³.

  4. Large fragment production calculations in relativistic heavy-ion reactions

    International Nuclear Information System (INIS)

    Seixas de Oliveira, L.F.

    1978-12-01

    The abrasion-ablation model is briefly described and then used to calculate cross sections for the production of large fragments resulting from target or projectile fragmentation in high-energy heavy-ion collisions. The number of nucleons removed from the colliding nuclei in the abrasion stage and the excitation energy of the remaining fragments (primary products) are calculated with the geometrical picture of two different models: the fireball and the firestreak models. The charge-to-mass dispersion of the primary products is calculated using either a model which assumes no correlations between proton and neutron positions inside the nucleus (hypergeometric distribution) or a model based upon the zero-point oscillations of the giant dipole resonance (NUC-GDR). Standard Weisskopf-Ewing statistical evaporation calculations are used to calculate final product distributions. Results of the pure abrasion-ablation model are compared with a variety of experimental data. The comparisons show the insufficiency of the extra-surface energy term used in the abrasion calculations. A frictional spectator interaction (FSI) is introduced which increases the average excitation energy of the primary products and improves the results considerably in most cases. Agreements and discrepancies between the results calculated with the different theoretical assumptions and the experimental data are studied. Of particular relevance is the possibility of observing nuclear ground-state correlations. Results of the recently completed experiment on fragmentation of 213 MeV/A 40Ar projectiles are studied and shown not to be capable of answering that question unambiguously. But predictions for the upcoming 48Ca fragmentation experiment clearly show the possibility of observing correlation effects. 78 references

  5. Artificial light harvesting by dimerized Möbius ring

    Science.gov (United States)

    Xu, Lei; Gong, Z. R.; Tao, Ming-Jie; Ai, Qing

    2018-04-01

    We theoretically study artificial light harvesting by a Möbius ring. When the donors in the ring are dimerized, the energies of the donor ring are split into two subbands. Because of the nontrivial Möbius boundary condition, both the photon and the acceptor are coupled to all collective-excitation modes in the donor ring. Therefore, the quantum dynamics of the light harvesting is subtly influenced by dimerization in the Möbius ring. It is discovered that energy transfer is more efficient in a dimerized ring than in an equally spaced ring. This discovery is also confirmed by a calculation with perturbation theory, which is equivalent to the Wigner-Weisskopf approximation. Our findings may be beneficial to the optimal design of artificial light harvesting.

  6. The development of science this century. 1 - from 1900 to World War II

    Energy Technology Data Exchange (ETDEWEB)

    Weisskopf, Victor F.

    1994-05-15

    This is the first in a series of three articles which together are a slightly revised version of a talk delivered at the meeting of the American Association for the Advancement of Science, in Boston, on 14 February 1993, and at a CERN Colloquium, on 5 August 1993, entitled 'Science - yesterday, today and tomorrow'. They describe the tremendous growth of scientific knowledge and insights acquired since the beginning of this century. In a highly abridged form, some of these ideas were used in an earlier CERN Courier article ('Crisis - the Weisskopf view'; October 1993, page 22). Because of the modest size of an issue of the CERN Courier, the text has been repackaged as three articles, each covering an identifiable historical epoch.

  7. Einstein, Ethics and the Atomic Bomb

    Science.gov (United States)

    Rife, Patricia

    2005-03-01

    Einstein voiced his ethical views against war as well as fascism via venues and alliances with a variety of organizations still debated today. In 1939, he signed a letter to President Roosevelt (drafted by younger colleagues Szilard, Wigner and others) warning the U.S. government about the danger of Nazi Germany gaining control of uranium in the Belgian-controlled Congo in order to develop atomic weapons, based on the discovery of fission by Otto Hahn and Lise Meitner. In 1945, he became a member of the Princeton-based "Emergency Committee for Atomic Scientists" organized by Bethe, Condon, Bacher, Urey, Szilard and Weisskopf. Rare Einstein slides will illustrate Dr. Rife's presentation on Albert Einstein's philosophic and ethical convictions about peace, and public stance against war (1914-1950).

  8. The development of science this century. 1 - from 1900 to World War II

    International Nuclear Information System (INIS)

    Weisskopf, Victor F.

    1994-01-01

    This is the first in a series of three articles which together are a slightly revised version of a talk delivered at the meeting of the American Association for the Advancement of Science, in Boston, on 14 February 1993, and at a CERN Colloquium, on 5 August 1993, entitled 'Science - yesterday, today and tomorrow'. They describe the tremendous growth of scientific knowledge and insights acquired since the beginning of this century. In a highly abridged form, some of these ideas were used in an earlier CERN Courier article ('Crisis - the Weisskopf view'; October 1993, page 22). Because of the modest size of an issue of the CERN Courier, the text has been repackaged as three articles, each covering an identifiable historical epoch

  9. Identification of highly deformed even–even nuclei in the neutron- and proton-rich regions of the nuclear chart from the B(E2)↑ and E2 predictions in the generalized differential equation model

    International Nuclear Information System (INIS)

    Nayak, R.C.; Pattnaik, S.

    2015-01-01

    We identify here the possible occurrence of large deformations in the neutron- and proton-rich (n-rich and p-rich) regions of the nuclear chart from extensive predictions of the values of the reduced quadrupole transition probability B(E2)↑ for the transition from the ground state to the first 2+ state and the corresponding excitation energy E2 of even-even nuclei in the recently developed generalized differential equation (GDE) model exclusively meant for these physical quantities. This is made possible by our analysis of the predicted values of these two physical quantities and the corresponding deformation parameters derived from them, such as the quadrupole deformation β2, the ratio of β2 to the Weisskopf single-particle value β2(sp), and the intrinsic electric quadrupole moment Q0, calculated for a large number of both known and hitherto unknown even-even isotopes of oxygen to fermium (O to Fm; Z = 8-100). Our critical analysis of the resulting data convincingly supports the possible existence of large collectivity for the nuclides 30,32Ne, 34Mg, 60Ti, 42,62,64Cr, 50,68Fe, 52,72Ni, 72,70,96Kr, 74,76Sr, 78,80,106,108Zr, 82,84,110,112Mo, 140Te, 144Xe, 148Ba, 122Ce, 128,156Nd, 130,132,158,160Sm and 138,162,164,166Gd, whose values of β2 are found to exceed 0.3, and even 0.4 in some cases. Our findings of large deformations in the exotic n-rich regions support the existence of another "island of inversion" in the heavy-mass region, possibly caused by breaking of the N = 70 subshell closure. (author)
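
    The deformation parameters quoted above follow from B(E2)↑ through the standard relations β2 = [4π/(3ZR₀²)]√(B(E2)↑/e²) with R₀ = 1.2 A^{1/3} fm, and Q₀ = √(16π B(E2)↑/5e²) in the rotational limit. A short Python sketch of the conversion, with illustrative input numbers rather than values from the paper, is:

        from math import pi, sqrt

        def beta2_from_BE2(BE2, Z, A, r0=1.2):
            """beta2 from B(E2)up in e^2 fm^4, with R0 = r0 * A^(1/3) fm."""
            return 4 * pi / (3.0 * Z * (r0 * A ** (1.0 / 3.0)) ** 2) * sqrt(BE2)

        def Q0_from_BE2(BE2):
            """Intrinsic quadrupole moment (e fm^2), rotational limit B(E2)up = 5/(16 pi) e^2 Q0^2."""
            return sqrt(16 * pi * BE2 / 5.0)

        def bw_unit_E2(A):
            """Weisskopf single-particle unit for E2, ~5.94e-2 * A^(4/3) e^2 fm^4."""
            return 5.94e-2 * A ** (4.0 / 3.0)

        BE2, Z, A = 500.0, 12, 34      # illustrative numbers only
        print("beta2 = %.2f, Q0 = %.0f e fm^2, B(E2) = %.0f W.u." % (
            beta2_from_BE2(BE2, Z, A), Q0_from_BE2(BE2), BE2 / bw_unit_E2(A)))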

  10. Calculating Absolute Transition Probabilities for Deformed Nuclei in the Rare-Earth Region

    Science.gov (United States)

    Stratman, Anne; Casarella, Clark; Aprahamian, Ani

    2017-09-01

    Absolute transition probabilities are the cornerstone of understanding nuclear structure physics in comparison to nuclear models. We have developed a code to calculate absolute transition probabilities from measured lifetimes, using a Python script and a Mathematica notebook. Both of these methods take pertinent quantities such as the lifetime of a given state, the energy and intensity of the emitted gamma ray, and the multipolarities of the transitions to calculate the appropriate B(E1), B(E2), B(M1) or in general, any B(σλ) values. The program allows for the inclusion of mixing ratios of different multipolarities and the electron conversion of gamma-rays to correct for their intensities, and yields results in absolute units or results normalized to Weisskopf units. The code has been tested against available data in a wide range of nuclei from the rare earth region (28 in total), including 146-154Sm, 154-160Gd, 158-164Dy, 162-170Er, 168-176Yb, and 174-182Hf. It will be available from the Notre Dame Nuclear Science Laboratory webpage for use by the community. This work was supported by the University of Notre Dame College of Science, and by the National Science Foundation, under Contract PHY-1419765.
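
    For a pure E2 transition, the conversion described here amounts to λγ = BR/[τ(1+α)] together with the standard relation λ(E2) ≈ 1.223×10⁹ E⁵ B(E2) (rate in s⁻¹, E in MeV, B(E2) in e²fm⁴). A minimal Python sketch of that step, using made-up input values rather than any of the measured rare-earth lifetimes, is:

        def BE2_from_lifetime(tau_ps, e_gamma_MeV, branching=1.0, alpha_icc=0.0):
            """B(E2, down) in e^2 fm^4 from a mean lifetime (ps) and gamma energy (MeV),
            assuming a pure E2 transition; branching ratio and conversion coefficient optional."""
            lam_gamma = branching / (tau_ps * 1e-12 * (1.0 + alpha_icc))   # partial gamma rate, 1/s
            return lam_gamma / (1.223e9 * e_gamma_MeV ** 5)

        def weisskopf_unit_E2(A):
            return 5.94e-2 * A ** (4.0 / 3.0)      # e^2 fm^4

        # Hypothetical rare-earth-like example (illustration only):
        tau, Eg, A = 100.0, 0.3, 156
        BE2 = BE2_from_lifetime(tau, Eg)
        print("B(E2) = %.0f e^2 fm^4 = %.1f W.u." % (BE2, BE2 / weisskopf_unit_E2(A)))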

  11. Incomplete fusion analysis of the 7Li-induced reaction on 93Nb within 3-6.5 MeV/nucleon

    Science.gov (United States)

    Kumar, Deepak; Maiti, Moumita

    2017-10-01

    Background: It is understood from recent experimental studies that prompt/resonant breakup, and transfer followed by breakup, in weakly bound 6,7Li-induced reactions play a significant role in complete-incomplete fusion (CF-ICF) and in the suppression/enhancement of the fusion cross section around the Coulomb barrier. Purpose: Investigation of ICF relative to CF by measuring cross sections of the populated residues, produced via different channels in the 7Li-induced reaction on a natNb target within the 3-6.5 MeV/nucleon energy region. Method: The 7Li beam was allowed to hit the self-supporting 93Nb targets, backed alternately by aluminium (Al) foil, within 3-6.5 MeV/nucleon energy. Populated residues were identified by offline γ-ray spectrometry. Measured excitation functions of different channels were compared with different equilibrium and pre-equilibrium models. Result: An enhancement in the cross sections of the proton- (~20-30 MeV) and α-emitting channels, which may be ascribed to ICF, was observed in the measured energy range when compared to the Hauser-Feshbach and exciton model calculations using EMPIRE, which satisfactorily reproduces the neutron channels, in contrast to the Weisskopf-Ewing model and hybrid Monte Carlo calculations. The incomplete fusion fraction was observed to increase with rising projectile energy. Conclusion: Contrary to ALICE14, the experimental results are well reproduced by EMPIRE throughout the measured energy range. The signature of ICF over CF indicates that breakup/transfer processes are involved in the weakly bound 7Li-induced reaction on 93Nb slightly above the Coulomb barrier.

  12. Calculations of Excitation Functions of Some Structural Fusion Materials for ( n, t) Reactions up to 50 MeV Energy

    Science.gov (United States)

    Tel, E.; Durgu, C.; Aktı, N. N.; Okuducu, Ş.

    2010-06-01

    Fusion promises an inexhaustible source of energy for humankind. Although there have been significant research and development studies on inertial and magnetic fusion reactor technology, there is still a long way to go before commercial fusion reactors penetrate the energy market. Tritium self-sufficiency must be maintained for a commercial power plant. For a self-sustaining (D-T) fusion driver, the tritium breeding ratio should be greater than 1.05. Working out the systematics of (n,t) reaction cross sections is therefore of great importance for defining the character of the excitation function of a given reaction taking place on various nuclei at different energies. In this study, (n,t) reactions for some structural fusion materials such as 27Al, 51V, 52Cr, 55Mn, and 56Fe have been investigated. New calculations of the excitation functions of the 27Al(n,t)25Mg, 51V(n,t)49Ti, 52Cr(n,t)50V, 55Mn(n,t)53Cr and 56Fe(n,t)54Mn reactions have been carried out up to 50 MeV incident neutron energy. In these calculations, pre-equilibrium and equilibrium effects have been investigated. The pre-equilibrium calculations involve the newly evaluated geometry-dependent hybrid model, the hybrid model and the cascade exciton model. Equilibrium effects are calculated according to the Weisskopf-Ewing model. Also in the present work, we have calculated (n,t) reaction cross sections at 14-15 MeV energy by using new semi-empirical formulas developed by Tel et al. The calculated results are discussed and compared with experimental data taken from the literature.

  13. CERN physicist receives Einstein Medal

    CERN Multimedia

    2006-01-01

    On 29 June the CERN theorist Gabriele Veneziano was awarded the prestigious Albert Einstein Medal for significant contributions to the understanding of string theory. This award is given by the Albert Einstein Society in Bern to individuals whose scientific contributions relate to the work of Einstein. Former recipients include exceptional physicists such as Murray Gell-Mann last year, but also Stephen Hawking and Victor Weisskopf. Gabriele Veneziano, a member of the integrated CERN Theory Team since 1977, led the Theory Division from 1994 to 1997 and has already received many prestigious prizes for his outstanding work, including the Enrico Fermi Prize (see CERN Courier, November 2005), the Dannie Heineman Prize for mathematical physics of the American Physical Society in 2004 (see Bulletin No. 47/2003), and the I. Ya. Pomeranchuk Prize of the Institute of Theoretical and Experimental Physics (Moscow) in 1999.

  14. The structure of nuclear matter; La structure de la matiere nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Bloch [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]

    1959-07-01

    Report on the most recent developments in the theory of systems of interacting fermions. After giving the general form of the ground-state energy expansion in perturbation theory, one indicates the terms whose summation leads to the Brueckner approximation. The numerical results obtained with this theory are briefly mentioned. A discussion is given of the difficulty occurring in the case of potentials which are attractive near the Fermi surface, due to the existence of the Cooper bound states. The main ideas of the proof of the Van Hove-Hugenholtz theorem on the Fermi energy are indicated, as well as its implications for the Brueckner theory. The extension of the methods described here to statistical mechanics is briefly mentioned. In the discussion, Professor Brueckner reports on the most recent results concerning the application of his theory to finite nuclei. Professor Migdal reviews the results which he has obtained with the method of Green's functions on the superconducting properties of finite nuclei. Professor Weisskopf gives a qualitative explanation of the success of the Brueckner theory. (author)

  15. European Committee for Future Accelerators

    International Nuclear Information System (INIS)

    Mulvey, John

    1983-01-01

    Nearly 21 years ago, in December 1962, Viktor Weisskopf and Cecil Powell, then respectively CERN's Director General and Chairman of the Scientific Policy Committee, called together a group of European high energy physicists to advise on steps to reach higher energy. The CERN PS had been in operation since 1959, its experimental programme was well established and the time had come to think of the future. The Chairman of the group, which later took the title 'European Committee for Future Accelerators', was Edoardo Amaldi and his influential report, presented to the CERN Council in June 1963, reviewed the whole structure and possible development of the field in the CERN Member States. Its proposals included the construction of the Intersecting Storage Rings (ISR), and of a 300 GeV proton accelerator which was then envisaged as being the major facility of a second CERN Laboratory elsewhere in Europe

  16. Value, utility and needs

    DEFF Research Database (Denmark)

    Ravn, Ib

    Positive Psychology and Self-Determination Theory (SDT, Deci & Ryan, 2000) share an ambition to identify the sources of value and human flourishing. In the 19th century, the emerging science of economics similarly investigated the nature of social value in a capitalist society that placed a price on every commodity. In this conceptual paper, SDT's mini-theory of Basic Psychological Needs is applied to solve the "paradox of value" in economics (Weisskopf, 1956): Why are some things that are practically free, such as oxygen, water and fellowship, so valuable to people, while others, often craved...? Extensive evidence has been gathered in SDT that needs for autonomy, competence and relatedness must be met for human flourishing and eudaimonia to occur (Ryan, Huta and Deci, 2013). There is a short step to arguing that what is valuable in life is what satisfies human needs. As has been established by SDT...

  17. Study of the Magnetically Induced QED Birefringence of the Vacuum in experiment OSQAR

    CERN Document Server

    AUTHOR|(CDS)2083980

    Classical electrodynamics in a vacuum is a linear theory and does not foresee photon-photon scattering or other nonlinear effects between electromagnetic fields. In 1936 Euler, Heisenberg and Weisskopf established, in the earliest development of quantum electrodynamics (QED), the framework in which the vacuum can behave as a birefringent medium in the presence of an external transverse magnetic field. This phenomenon is known as Vacuum Magnetic Birefringence (VMB) and it has remained a challenge for optical metrology since the first calculations in 1970. When linearly polarized light travels through a strong transverse magnetic field in vacuum, the polarization state of the light changes to elliptical. The difference between the refractive indices of the ordinary and extraordinary rays is directly related to fundamental constants, such as the fine structure constant or the Compton wavelength. Contributions to VMB could also arise from the existence of light scalar or pseudoscalar particles, such as axions or axion-like particles. Axions ...

  18. Cross section measurements of the (n,2n) reaction with 14 MeV neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Kaji, Harumi; Shiokawa, Takanobu [Tohoku Univ., Sendai (Japan). Faculty of Science; Suehiro, Teruo; Yagi, Masuo

    1975-07-01

    Cross sections are measured for the reactions 64Zn(n,2n)63Zn, 75As(n,2n)74As, 79Br(n,2n)78Br, 90Zr(n,2n)89Zr, 141Pr(n,2n)140Pr and 144Sm(n,2n)143Sm by the activation method in the energy range 13.5-14.8 MeV. The cross sections are determined relative to the cross sections for the 63Cu(n,2n)62Cu and 19F(n,2n)18F reactions. Before the cross-section measurements, incident-neutron energies were measured by the recoil-proton method. The resulting cross sections are compared with data existing in the literature and are discussed with reference to the theory of Weisskopf and Ewing.

  19. Analysis of average radiation widths of neutron resonances

    International Nuclear Information System (INIS)

    Malezki, H.; Popov, A.B.; Trzeciak, K.

    1982-01-01

    On the basis of the available data on neutron resonance parameters, average values of the radiation widths (Γγ) are calculated for a wide range of nuclei with atomic weights from 50 up to 250. The experimental values are compared with different variants of theoretical estimates of Γγ, which reduce to a dependence of Γγ upon the atomic weight A, the excitation energy U and the level-density parameter a of the form Γγ = C A^α U^β a^γ. In addition, empirical values of C, α, β, γ are selected that best satisfy the experimental data. It is found that use of the a = kA hypothesis leads to a significantly better agreement between all theoretical estimates of Γγ and the experimental values. It turns out that the estimates by Weisskopf, by Bondarenko-Urin, or with empirically chosen parameters give an approximately similar correspondence of the calculated Γγ values to the experimental data
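
    A parameterization of the form Γγ = C A^α U^β a^γ is linear in log space, so the empirical constants can be obtained by ordinary least squares. A minimal Python sketch with synthetic data (not the resonance data set analysed above; the level-density parameter is drawn independently here to keep the fit well conditioned) is:

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.uniform(50.0, 250.0, 80)                 # atomic weights
        U = rng.uniform(5.0, 9.0, 80)                    # excitation energies (MeV)
        a = rng.uniform(6.0, 31.0, 80)                   # level-density parameters (1/MeV)
        C0, al0, be0, ga0 = 2.0e3, -0.9, 2.0, -1.5       # made-up "true" constants
        Gg = C0 * A**al0 * U**be0 * a**ga0 * rng.lognormal(0.0, 0.1, 80)

        # log Gamma_gamma = log C + alpha*log A + beta*log U + gamma*log a
        X = np.column_stack([np.ones_like(A), np.log(A), np.log(U), np.log(a)])
        coef, *_ = np.linalg.lstsq(X, np.log(Gg), rcond=None)
        print("C=%.3g alpha=%.2f beta=%.2f gamma=%.2f" % (np.exp(coef[0]), *coef[1:]))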

  20. The Variable Crab Nebula: Evidence for a Connection Between GeV Flares and Hard X-ray Variations

    Science.gov (United States)

    Wilson-Hodge, Colleen A.; Harding, A. K.; Hays, E. A.; Cherry, M. L.; Case, G. L.; Finger, M. H.; Jenke, P.; Zhang, X.

    2016-01-01

    In 2010, hard X-ray variations (Wilson-Hodge et al. 2011) and GeV flares (Tavani et al 2011, Abdo et al. 2011) from the Crab Nebula were discovered. Connections between these two phenomena were unclear, in part because the timescales were quite different, with yearly variations in hard X-rays and hourly to daily variations in the GeV flares. The hard X-ray flux from the Crab Nebula has again declined since 2014, much like it did in 2008-2010. During both hard X-ray decline periods, the Fermi LAT detected no GeV flares, suggesting that injection of particles from the GeV flares produces the much slower and weaker hard X-ray variations. The timescale for the particles emitting the GeV flares to lose enough energy to emit synchrotron photons in hard X-rays is consistent with the yearly variations observed in hard X-rays and with the expectation that the timescale for variations slowly increases with decreasing energy. This hypothesis also predicts even slower and weaker variations below 10 keV, consistent with the non-detection of counterparts to the GeV flares by Chandra (Weisskopf et al 2013). We will present a comparison of the observed hard X-ray variations and a simple model of the decay of particles from the GeV flares to test our hypothesis.

  1. People and things. CERN Courier, Oct 1988, v. 28(8)

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1988-10-15

    The article reports on achievements of various people, staff changes and position opportunities within the CERN organization and contains news updates on upcoming or past events. With $11 million in pre-construction funds for the proposed KAON Factory, a major reorganization took place at the Canadian TRIUMF laboratory in Vancouver on September 1. A EULIMA (European Light Ion Medical Accelerator) workshop on the potential value of light ion beam therapy will be held at the Centre Anthoine-Lacassagne, Nice, France, from 3-5 November. As a tribute to Viktor Weisskopf on his 80th birthday, an international colloquium 'Science, Culture and Peace' was organized by CERN and by the 'Ettore Majorana' Centre for Scientific Culture, Erice, Sicily, at CERN on 19 and 20 September. An advanced accelerator physics course, organized jointly by the CERN Accelerator School and Uppsala University, Sweden, and placing special emphasis on the problems of small rings, will be held from 18-29 September in Uppsala.

  2. MANAGEMENT BOARD MEETING OF 25 APRIL 2002

    CERN Document Server

    2002-01-01

    Death of a Former CERN Director-General The Director-General informed the Management Board that Victor Weisskopf, Director-General of CERN from 1961-1965, had died in New York on Sunday, 21 April at the age of 93. Fiftieth Anniversary of the Provisional Establishment of CERN The Director-General observed that 2002 marked the fiftieth anniversary of the 1952 agreement establishing a provisional European Council for Nuclear Research (CERN), which had been signed at the second session of an Inter-Governmental Meeting held in Geneva on 15 February that year. The 'provisional CERN' had been dissolved on 29 September 1954, when Member States had ratified the Convention formally establishing the European Organization for Nuclear Research. Later in the year it would be appropriate to start to discuss plans for an event to mark CERN's official fiftieth anniversary in September 2004. Outcome of the Recent Meetings of the Resources Review Boards R. Cashmore, Director for Collider Programmes, briefly reported on the outc...

  3. Optical Search for QED vacuum magnetic birefringence, Axions and photon Regeneration

    CERN Multimedia

    Pugnat, P; Hryczuk, A; Finger, M; Finger, M; Kral, M

    2007-01-01

    Since its prediction in 1936 by Euler, Heisenberg and Weisskopf in the earlier development of the Quantum Electrodynamic (QED) theory, the Vacuum Magnetic Birefringence (VMB) is still a challenge for optical metrology techniques. According to QED, the vacuum behaves as an optically active medium in the presence of an external magnetic field. It can be experimentally probed with a linearly polarized laser beam. After propagating through the vacuum submitted to a transverse magnetic field, the polarization of the laser beam will change to elliptical and the parameters of the polarization are directly related to fundamental constants such as the fine structure constant and the electron Compton wavelength. Contributions to the VMB could also arise from the existence of light scalar or pseudo-scalar particles like axions that couple to two photons and this would manifest itself as a sizeable deviation from the initial QED prediction. On one side, the interest in axion search, providing an answer to the strong-CP p...

  4. Albert Einstein Centenary

    CERN Document Server

    Amati, Daniele; Weisskopf, Victor Frederick; CERN. Geneva

    1979-01-01

    The scientist and his work by D. AMATI and S. FUBINI. A socially engaged scientist by V. F. WEISSKOPF. This week, we pay homage to Albert Einstein, the giant of twentieth-century physics born exactly 100 years ago on 14 March 1879 in Ulm, Germany. At the height of his career, Einstein made a whole series of monumental contributions to physics, including the elaborate theories of special and general relativity which revolutionized human thought and marked a major breakthrough in our understanding of the Universe. Along with quantum mechanics, relativity is one of the twin pillars of understanding which allow us here at CERN to study the behaviour of the tiniest components of matter. The development of quantum mechanics took the combined efforts of some of the greatest scientists the world has known, while relativity was developed almost single-handed by Einstein. The centenary of his birth is being commemorated all over the world. Exhibitions and symposia are being organized, books published, postage stamps is...

  5. Albert Einstein Centenary

    CERN Document Server

    Weisskopf, Victor Frederick; CERN. Geneva

    1979-01-01

    A socially engaged scientist by V. F. WEISSKOPF. On the origin of the Einstein-Russell statement on nuclear weapons by H. S. BURHOP. This week, we pay homage to Albert Einstein, the giant of twentieth-century physics born exactly 100 years ago on 14 March 1879 in Ulm, Germany. At the height of his career, Einstein made a whole series of monumental contributions to physics, including the elaborate theories of special and general relativity which revolutionized human thought and marked a major breakthrough in our understanding of the Universe. Along with quantum mechanics, relativity is one of the twin pillars of understanding which allow us here at CERN to study the behaviour of the tiniest components of matter. The development of quantum mechanics took the combined efforts of some of the greatest scientists the world has known, while relativity was developed almost single-handed by Einstein. The centenary of his birth is being commemorated all over the world. Exhibitions and symposia are being organized, books...

  6. Consistent Probabilistic Description of the Neutral Kaon System: Novel Observable Effects

    CERN Document Server

    Bernabeu, J.; Villanueva-Perez, P.

    2013-01-01

    The neutral Kaon system has both CP violation in the mass matrix and a non-vanishing lifetime difference in the width matrix. This leads to an effective Hamiltonian which is not a normal operator, with incompatible (non-commuting) masses and widths. In the Weisskopf-Wigner Approach (WWA), by diagonalizing the entire Hamiltonian, the unphysical non-orthogonal "stationary" states $K_{L,S}$ are obtained. These states have complex eigenvalues whose real (imaginary) part does not coincide with the eigenvalues of the mass (width) matrix. In this work we describe the system as an open Lindblad-type quantum mechanical system due to Kaon decays. This approach, in terms of density matrices for initial and final states, provides a consistent probabilistic description, avoiding the standard problems because the width matrix becomes a composite operator not included in the Hamiltonian. We consider the dominant-decay channel to two pions, so that one of the Kaon states with definite lifetime becomes stable. This new approa...
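
    The point that the effective Hamiltonian is not a normal operator is easy to illustrate numerically: when the mass and width matrices do not commute, the real and imaginary parts of the eigenvalues of H = M − (i/2)Γ differ from the eigenvalues of M and Γ, and the eigenvectors are not orthogonal. A small Python sketch with toy, non-physical 2×2 matrices (nothing to do with the actual kaon parameters) is:

        import numpy as np

        # Toy Hermitian mass and width matrices chosen so that [M, Gamma] != 0
        M = np.array([[1.0, 0.2], [0.2, 1.1]])
        Gamma = np.array([[0.5, 0.1j], [-0.1j, 0.05]])
        H = M - 0.5j * Gamma                      # non-normal effective Hamiltonian

        evals, evecs = np.linalg.eig(H)
        print("||[M, Gamma]||      :", np.linalg.norm(M @ Gamma - Gamma @ M))
        print("Re eig(H)           :", np.sort(evals.real))
        print("eig(M)              :", np.sort(np.linalg.eigvalsh(M)))
        print("-2 Im eig(H)        :", np.sort(-2.0 * evals.imag))
        print("eig(Gamma)          :", np.sort(np.linalg.eigvalsh(Gamma)))
        print("|<K_L|K_S>| overlap :", abs(np.vdot(evecs[:, 0], evecs[:, 1])))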

  7. The development of science this century. 3 - from 1970 to the near future

    Energy Technology Data Exchange (ETDEWEB)

    Weisskopf, Victor F.

    1994-07-15

    This is the final article in a series of three which together are a slightly revised version of a talk delivered at the meeting of the American Association for the Advancement of Science, in Boston, on 14 February 1993, and at a CERN Colloquium, on 5 August 1993, entitled 'Science - yesterday, today and tomorrow'. Together they describe the tremendous growth of scientific knowledge and insights acquired since the beginning of this century. In a highly abridged form, some of these ideas were used in an earlier CERN Courier article ('Crisis - the Weisskopf view'; October 1993, page 22). Because of the restrictions of a single issue of the CERN Courier, the text has been repackaged as three articles, each covering an identifiable historical epoch. The first, covering the period from 1900 to World War II, was published in the May issue, page 1. The second, extending from 1946 to about 1970, appeared in the June issue, page 9.

  8. People and things. CERN Courier, Oct 1988, v. 28(8)

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    The article reports on achievements of various people, staff changes and position opportunities within the CERN organization and contains news updates on upcoming or past events. With $11 million in pre-construction funds for the proposed KAON Factory, a major reorganization took place at the Canadian TRIUMF laboratory in Vancouver on September 1. A EULIMA (European Light Ion Medical Accelerator) workshop on the potential value of light ion beam therapy will be held at the Centre Anthoine-Lacassagne, Nice, France, from 3-5 November. As a tribute to Viktor Weisskopf on his 80th birthday, an international colloquium 'Science, Culture and Peace' was organized by CERN and by the 'Ettore Majorana' Centre for Scientific Culture, Erice, Sicily, at CERN on 19 and 20 September. An advanced accelerator physics course, organized jointly by the CERN Accelerator School and Uppsala University, Sweden, and placing special emphasis on the problems of small rings, will be held from 18-29 September in Uppsala

  9. Golden Jubilee photos: ISR - The first proton-proton interactions

    CERN Document Server

    2004-01-01

    At the inauguration ceremony for the Intersecting Storage Rings (ISR) on 16 October 1971, the man in charge of their construction, Kjell Johnsen, presented the "key" to the machine to Edoardo Amaldi, President of Council. Seated on the stage with them for this symbolic event were Victor Weisskopf, Marcel Antonioz, Willy Jentschke (seen on the left of the photo) and Werner Heisenberg (on the far right). On 27 January that year, in a world premiere, signals produced by proton-proton collisions had been observed at the ISR. The protons, supplied by the PS, were injected into two identical rings, each measuring 300 metres in diameter, and collided head-on at the 8 points where the rings intersected. The installation, which remained in operation until 1984, gave physicists access to a wide range of energies for hadron physics, hitherto restricted to the data from cosmic ray studies. The many technological challenges that were met at the ISR, in the fields of vacuum technology and stochastic cooling for instance,...

  10. Observation/confirmation of hindered E2 strength in {sup 18}C/{sup 16}C

    Energy Technology Data Exchange (ETDEWEB)

    Ong, H.J. [Osaka University, RCNP, Ibaraki, Osaka (Japan); Imai, N. [KEK, Tsukuba, Ibaraki (Japan); Suzuki, D.; Iwasaki, H.; Onishi, T.K.; Suzuki, M.K.; Nakao, T.; Ichikawa, Y. [University of Tokyo, Department of Physics, Bunkyo,Tokyo (Japan); Sakurai, H.; Takeuchi, S.; Kondo, Y.; Aoi, N.; Baba, H.; Bishop, S.; Ishihara, M.; Kubo, T.; Motobayashi, T.; Yanagisawa, Y. [RIKEN, RIKEN Nishina Center, Wako, Saitama (Japan); Ota, S. [University of Tokyo, RIKEN Campus, CNS, Wako, Saitama (Japan); Togano, Y.; Kurita, K. [Rikkyo University, Department of Physics, Toshima, Tokyo (Japan); Nakamura, T.; Okumura, T. [Tokyo Institute of Technology, Department of Physics, Meguro, Tokyo (Japan)

    2009-12-15

    We have measured the lifetime of the first excited 2⁺ state in 18C using an upgraded recoil shadow method to determine the electric quadrupole transition strength. The measured mean lifetime is 18.9±0.9 (stat)±4.4 (syst) ps, corresponding to B(E2; 2₁⁺ → 0⁺gs) = 4.3±0.2±1.0 e²fm⁴, or about 1.5 Weisskopf units. The mean lifetime of the first 2⁺ state in 16C was remeasured to be 18.3±1.4±4.8 ps, about four times shorter than the value reported previously. The discrepancy is explained by incorporating the γ-ray angular distribution obtained in this work into the previous measurement. The observed transition strengths in 16,18C are hindered compared to the empirical values, indicating that the anomalous E2 strength observed in 16C persists in 18C. (orig.)

  11. Waveguide quantum electrodynamics in squeezed vacuum

    Science.gov (United States)

    You, Jieyu; Liao, Zeyang; Li, Sheng-Wen; Zubairy, M. Suhail

    2018-02-01

    We study the dynamics of a general multiemitter system coupled to the squeezed vacuum reservoir and derive a master equation for this system based on the Weisskopf-Wigner approximation. In this theory, we include the effect of positions of the squeezing sources which is usually neglected in the previous studies. We apply this theory to a quasi-one-dimensional waveguide case where the squeezing in one dimension is experimentally achievable. We show that while dipole-dipole interaction induced by ordinary vacuum depends on the emitter separation, the two-photon process due to the squeezed vacuum depends on the positions of the emitters with respect to the squeezing sources. The dephasing rate, decay rate, and the resonance fluorescence of the waveguide-QED in the squeezed vacuum are controllable by changing the positions of emitters. Furthermore, we demonstrate that the stationary maximum entangled NOON state for identical emitters can be reached with arbitrary initial state when the center-of-mass position of the emitters satisfies certain conditions.

  12. The development of science this century. 2 - from 1946 to 1970

    International Nuclear Information System (INIS)

    Weisskopf, Victor F.

    1994-01-01

    This is the second in a series of three articles which together are a slightly revised version of a talk delivered at the meeting of the American Association for the Advancement of Science, in Boston, on 14 February 1993, and at a CERN Colloquium, on 5 August 1993, entitled 'Science - yesterday, today and tomorrow'. Together they describe the tremendous growth of scientific knowledge and insights acquired since the beginning of this century. In a highly abridged form, some of these ideas were used in an earlier CERN Courier article ('Crisis - the Weisskopf view'; October 1993, page 22). Because of the restrictions of a single issue of the CERN Courier, the text has been repackaged as three articles, each covering an identifiable historical epoch. The first, covering the period from 1900 to World War II, was published in the May issue. The third article will cover the period from 1970 to the end of the century

  13. Reduced electric-octupole transition probabilities, B(E3; 0₁⁺ → 3₁⁻), for even-even nuclides throughout the periodic table

    International Nuclear Information System (INIS)

    Spear, R.H.

    1988-11-01

    Adopted values for the excitation energy, Ex(3₁⁻), of the first 3⁻ state of the even-even nuclei are tabulated. Values of the reduced electric-octupole transition probability, B(E3; 0₁⁺ → 3₁⁻), from the ground state to this state, as determined from Coulomb excitation, lifetime measurements, inelastic electron scattering, deformation parameters β₃ obtained from angular distributions of inelastically scattered nucleons and light ions, and other miscellaneous procedures, are listed in separate tables. Adopted values for B(E3; 0₁⁺ → 3₁⁻) are presented in Table VII, together with the E3 transition strengths, in Weisskopf units, and the product Ex(3₁⁻) × B(E3; 0₁⁺ → 3₁⁻) expressed as a percentage of the energy-weighted E3 sum-rule strength. An evaluation is made of the reliability of B(E3; 0₁⁺ → 3₁⁻) values deduced from deformation parameters β₃. The literature has been covered to March 1988

  14. The development of science this century. 3 - from 1970 to the near future

    International Nuclear Information System (INIS)

    Weisskopf, Victor F.

    1994-01-01

    This is the final article in a series of three which together are a slightly revised version of a talk delivered at the meeting of the American Association for the Advancement of Science, in Boston, on 14 February 1993, and at a CERN Colloquium, on 5 August 1993, entitled 'Science - yesterday, today and tomorrow'. Together they describe the tremendous growth of scientific knowledge and insights acquired since the beginning of this century. In a highly abridged form, some of these ideas were used in an earlier CERN Courier article ('Crisis - the Weisskopf view'; October 1993, page 22). Because of the restrictions of a single issue of the CERN Courier, the text has been repackaged as three articles, each covering an identifiable historical epoch. The first, covering the period from 1900 to World War II, was published in the May issue, page 1. The second, extending from 1946 to about 1970, appeared in the June issue, page 9

  15. REMINDER - Compliance with Operational Circular No. 2 on conditions of access to the fenced CERN sites

    CERN Multimedia

    Relations with the Host States Service

    2004-01-01

    The purpose of Operational Circular No. 2 is to contribute to the protection of people and property by defining the conditions of access to the Organization's fenced sites. However, recently, the services concerned have noted a significant increase in the instances of non-compliance with those conditions that cannot be tolerated, for example: use of CERN access cards by people, other than the cardholders themselves, in order to gain access to facilities without having attended the required safety course; speeding, particularly on Route Gregory and Route Weisskopf; driving in and out of the site on the wrong side of the road; parking on spaces set aside for the disabled; nuisance parking, especially in the proximity of the Restaurants; the dumping of wrecked vehicles. As the aforementioned instances of non-compliance can lead to dangerous situations, the Organization reserves the right to apply the penalties provided for under paragraph 26 of Operational Circular No. 2, namely to refuse access to the site ...

  16. Compliance with Operational Circular No. 2 on conditions of access to the fenced CERN sites

    CERN Multimedia

    Relations with the Host States Service

    2004-01-01

    The purpose of Operational Circular No. 2 is to contribute to the protection of people and property by defining the conditions of access to the Organization's fenced sites. However, recently, the services concerned have noted a significant increase in the instances of non-compliance with those conditions that cannot be tolerated, for example: use of CERN access cards by people, other than the cardholders themselves, in order to gain access to facilities without having attended the required safety course; speeding, particularly on Route Gregory and Route Weisskopf; driving in and out of the site on the wrong side of the road; parking on spaces set aside for the disabled; nuisance parking, especially in the proximity of the Restaurants; the dumping of wrecked vehicles. As the aforementioned instances of non-compliance can lead to dangerous situations, the Organization reserves the right to apply the penalties provided for under paragraph 26 of Operational Circular No. 2, namely to refuse access to the site...

  17. REMINDER: Compliance with Operational Circular No. 2 (Rev. 1) on “Conditions of access to the fenced CERN site”

    CERN Multimedia

    2012-01-01

    The purpose of Operational Circular No. 2 (Rev. 1) is to contribute to the protection of people and property by defining the conditions of access to the Organization's fenced sites. The behaviours that cannot be tolerated under any circumstances are: use of CERN access cards by people, other than the cardholders themselves, in order to gain access to facilities without having attended the required safety course; speeding, particularly on Route Gregory and Route Weisskopf; driving in and out of the site on the wrong side of the road; parking on spaces set aside for the disabled; nuisance parking, especially in the proximity of the restaurants; dumping of wrecked vehicles. As the aforementioned instances of non-compliance can lead to dangerous situations, the Organization reserves the right to apply the penalties provided for under paragraph 26 of Operational Circular No. 2 (Rev. 1), namely to refuse access to the site to people and/or their vehicles deemed to be in infringement of the circu...

  18. Quantum Humor: The Playful Side of Physics at Bohr's Institute for Theoretical Physics

    Science.gov (United States)

    Halpern, Paul

    2012-09-01

    From the 1930s to the 1950s, a period of pivotal developments in quantum, nuclear, and particle physics, physicists at Niels Bohr's Institute for Theoretical Physics in Copenhagen took time off from their research to write humorous articles, letters, and other works. Best known is the Blegdamsvej Faust, performed in April 1932 at the close of one of the Institute's annual conferences. I also focus on the Journal of Jocular Physics, a humorous tribute to Bohr published on the occasions of his 50th, 60th, and 70th birthdays in 1935, 1945, and 1955. Contributors included Léon Rosenfeld, Victor Weisskopf, George Gamow, Oskar Klein, and Hendrik Casimir. I examine their contributions along with letters and other writings to show that they offer a window into some issues in physics at the time, such as the interpretation of complementarity and the nature of the neutrino, as well as the politics of the period.

  19. Tests of the methods of analysis of picosecond lifetimes and measurement of the half-life of the 569.6 keV level in 207Pb

    International Nuclear Information System (INIS)

    Lima, E. de; Kawakami, H.; Lima, A. de; Hichwa, R.; Ramayya, A.V.; Hamilton, J.H.; Dunn, W.; Kim, H.J.

    1978-01-01

    Customarily, the half-life of a nuclear state is extracted from a delayed time spectrum by analysis of the centroid shift, of the slope, or, more recently, by the convolution method. Two formulas have recently been proposed relating the centroid shift to the half-life of the nuclear state, and the two procedures can give different results when T1/2 is of the same order as, or less than, the time width of one channel. An extensive investigation of these two formulas and procedures has been made by measuring the half-life of the first excited state in 207Pb at 569.6 keV. This analysis confirms Bay's formula relating the centroid shift to the half-life of the state. The half-life of the 569.6 keV level in 207Pb is measured to be (129 ± 3) ps, in excellent agreement with Weisskopf's single-particle estimate of 128 ps for an E2 transition. (Auth.)
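
    For orientation (not part of the original abstract), the quoted single-particle value can be reproduced from the standard Weisskopf E2 estimate Γ_W(E2) ≈ 4.9 × 10⁻⁸ A^{4/3} E_γ⁵ eV (E_γ in MeV); a minimal sketch:

    ```python
    import math

    HBAR_EV_S = 6.582e-16                      # hbar in eV*s

    def gamma_W_E2_eV(A: int, E_MeV: float) -> float:
        """Weisskopf single-particle width for an E2 transition, in eV."""
        return 4.9e-8 * A ** (4.0 / 3.0) * E_MeV ** 5

    A, E = 207, 0.5696                         # 207Pb, 569.6 keV first excited state
    half_life_s = math.log(2) * HBAR_EV_S / gamma_W_E2_eV(A, E)
    print(f"Weisskopf E2 estimate: T1/2 ~ {half_life_s * 1e12:.0f} ps")   # ~127 ps, cf. the 128 ps quoted above
    ```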

  20. The development of science this century. 2 - from 1946 to 1970

    Energy Technology Data Exchange (ETDEWEB)

    Weisskopf, Victor F.

    1994-06-15

    This is the second in a series of three articles which together are a slightly revised version of a talk delivered at the meeting of the American Association for the Advancement of Science, in Boston, on 14 February 1993, and at a CERN Colloquium, on 5 August 1993, entitled 'Science - yesterday, today and tomorrow'. Together they describe the tremendous growth of scientific knowledge and insights acquired since the beginning of this century. In a highly abridged form, some of these ideas were used in an earlier CERN Courier article ('Crisis - the Weisskopf view'; October 1993, page 22). Because of the restrictions of a single issue of the CERN Courier, the text has been repackaged as three articles, each covering an identifiable historical epoch. The first, covering the period from 1900 to World War II, was published in the May issue. The third article will cover the period from 1970 to the end of the century.

  1. Advanced X-Ray Telescope Mirrors Provide Sharpest Focus Ever

    Science.gov (United States)

    1997-03-01

    Performing beyond expectations, the high-resolution mirrors for NASA's most powerful orbiting X-ray telescope have successfully completed initial testing at Marshall Space Flight Center's X-ray Calibration Facility, Huntsville, AL. "We have the first ground test images ever generated by the telescope's mirror assembly, and they are as good as -- or better than -- expected," said Dr. Martin Weisskopf, Marshall's chief scientist for NASA's Advanced X-ray Astrophysics Facility (AXAF). The mirror assembly, four pairs of precisely shaped and aligned cylindrical mirrors, will form the heart of NASA's third great observatory. The X-ray telescope produces an image by directing incoming X-rays to detectors at a focal point some 30 feet beyond the telescope's mirrors. The greater the percentage of X-rays brought to focus and the smaller the size of the focal spot, the sharper the image. Tests show that on orbit, the mirror assembly of the Advanced X-ray Astrophysics Facility will be able to focus approximately 70 percent of X-rays from a source to a spot less than one-half arc second in radius. The telescope's resolution is equivalent to being able to read the text of a newspaper from half a mile away. "The telescope's focus is very clear, very sharp," said Weisskopf. "It will be able to show us details of very distant sources that we know are out there, but haven't been able to see clearly." In comparison, previous X-ray telescopes -- Einstein and Rosat -- were only capable of focusing X-rays to five arc seconds. The Advanced X-ray Telescope's resolving power is ten times greater. "Images from the new telescope will allow us to make major advances toward understanding how exploding stars create and disperse many of the elements necessary for new solar systems and for life itself," said Dr. Harvey Tananbaum, director of the Advanced X-ray Astrophysics Facility Science Center at the Smithsonian Astrophysical Observatory, in Cambridge, MA -- responsible for the telescope

  2. An investigation into the 29Si(p,gamma)30P reaction

    International Nuclear Information System (INIS)

    Oberholzer, P.

    1978-01-01

    In the experiment reported here, information was obtained on the energy levels of 30P by means of the 29Si(p,gamma)30P reaction. The experimental work was done with two accelerators, the 3 MV Van de Graaff accelerator of the AEB and the 2.5 MV Van de Graaff accelerator of the P.U. for C.H.E. A 60 cm³ and an 80 cm³ Ge(Li) detector were used. The excitation curve of the 29Si + p reaction was measured in the 1.3 - 2.0 MeV energy range. In order to calculate more accurate proton energies, the Q-value of the reaction was redetermined. The gamma decay of 12 resonances in the energy range 1.1 - 1.9 MeV was studied. The branching ratios of 25 bound levels in 30P were determined, as well as the excitation energies and branching ratios of two new bound levels. A different value for the excitation energy of one bound level was found. The mean lifetimes of 12 bound levels were measured by means of the Doppler shift attenuation method and the results were compared to those of other groups using different methods of lifetime measurement. Spin and parity assignments based on Weisskopf estimates were made for a number of resonance states, as well as for one new bound state. The experimental results were compared with the results of two models

  3. Study of the coefficients of internal conversion for transition energies approaching the threshold

    International Nuclear Information System (INIS)

    Farani Coursol, Nelcy.

    1979-01-01

    Internal conversion coefficients were determined experimentally with great accuracy for transition energies which constitute tests for the theories (energies at most ten keV above the K-shell threshold), and the results obtained were compared with the values calculated (or to be calculated) from theoretical models. Owing to the difficulties raised by the precise determination of the internal conversion coefficients (ICC), in the first stage we selected radionuclides with relatively simple decay schemes and the transitions: 30 keV in 93mNb, 35 keV in 125mTe, 14 keV in 57Fe and 39 keV in 129mXe. It was observed that 'problems' exist with respect to the ICCs of high-multipolarity transitions, so transitions of this kind were examined in a systematic manner. The possibility of penetration effects occurring for the transitions studied experimentally was examined. The considerations are presented which 'authorized' us to disregard the dynamic part of the ICC for the transitions approaching the threshold (L selection rules and lifetimes of nuclear levels in relation to Weisskopf-Moszkowski estimates). The Kurie plot was determined experimentally for the β⁻ transition and Q_β was evaluated with a significant gain in accuracy compared with the presently available values. Finally, a certain number of ICCs of transitions already determined with good precision were recalculated, in order to extend our analysis and detect any possible systematic errors [fr

  4. A quantum theory of the self-energy of non-relativistic fermions and of the Coulomb-Yukawa force acting between them

    International Nuclear Information System (INIS)

    Ernst, V.

    1978-01-01

    The idea of the systematic Weisskopf-Wigner approximation, as used sporadically in atomic physics and quantum optics, is extended here to the interaction of a field of non-relativistic fermions with a field of relativistic bosons. It is shown that the usual (non-existing) interaction Hamiltonian of this system can be written as a sum of a countable number of self-adjoint and bounded partial Hamiltonians. The system of these Hamiltonians defines the order hierarchy of the present approximation scheme. To demonstrate its physical utility, it is shown that in a certain order it provides a satisfactory quantum theory of the 'self-energy' of the fermions under discussion. This is defined as the binding energy of bosons bound to the fermions and building up the latter's 'individual Coulomb or Yukawa fields' in the sense of expectation values of the corresponding field operator. In states of more than one fermion the bound photons act as a mediating agent between the fermions; this mechanism closely resembles the Coulomb or Yukawa 'forces' used in conventional non-relativistic quantum mechanics. (author)

  5. History at your fingertips

    CERN Multimedia

    2003-01-01

    Would you have liked to meet Victor Weisskopf, Niels Bohr or Robert Oppenheimer? Do you wish you had visited the famous Gargamelle or UA1 experiments? Well, you still can... at least on video. Thanks to the ETT Division's Document Handling (DH) Group, you now have access to the Audiovisual Service's amazing archives, where you can meet the leading figures and relive the main events in the Laboratory's history. All this without even having to leave your office. The DH Group has launched a new on-line catalogue comprising a selection of the Audiovisual Service's collection. These films were digitised from the original D3, U-matic or VHS tapes, and you can watch them from your work station. The desire to make the Laboratory's historical visual resources available to as many users as possible had already led to the creation in 1997 of an on-line catalogue of photographs covering the period from 1951 to the present day. (See http://weblib.cern.ch/Photos/). The new video catalogue provides you with the rare opport...

  6. Jean-Paul Diss (1928-2012)

    CERN Multimedia

    2012-01-01

    We were greatly saddened to learn of the sudden death of Dr Jean-Paul Diss at his home on 7 June 2012.   Jean-Paul studied medicine at the Strasbourg Faculty of Medicine and began his career as an occupational medical practitioner at the Mulhouse potash mines. He then came to CERN in 1965 to set up a Medical Service at the request of the then CERN Director-General, Professor Weisskopf. He was the first person to hold the position of Head of the Medical Service and he invested all his energies to provide the Organization with an occupational healthcare unit worthy of the name. As a pioneer of occupational medicine, he worked tirelessly to improve the working conditions of the members of the personnel and continued to be solicitous about the health of every member of the personnel until his retirement in 1993.  Over the past twenty years, he remained active within the CERN Pensioners Association, in particular as the pensioners’ representative on the CERN Health Insuran...

  7. Octupole deformation in 226Th

    International Nuclear Information System (INIS)

    Liang, C.F.; Paris, P.; Sheline, R.K.

    1999-01-01

    Sources of 230U - resulting from the β⁻ decay of 230Pa produced in the reaction 232Th(p,3n) with 34 MeV protons - were purified and used to determine the half-life of the 230.4 keV 1⁻ state in 226Th. Using the Doppler shift method, following the α decay of 230U, a value T1/2 = 3.5 ± 1.2 ps was determined. The half-life of the 226.4 keV 4⁺ state was also measured as T1/2 = 145 ± 20 ps. The absolute probabilities, in Weisskopf units, of the 230 keV and 158 keV γ transitions depopulating the 1⁻ state were determined as (2.50 ± 0.86) × 10⁻³ and (4.54 ± 1.55) × 10⁻³, respectively. Experimental values of the intrinsic dipole moment, D₀, and of D₀/Q₀ were determined as 0.27 ± 0.05 e·fm and (3.5 ± 1.0) × 10⁻⁴ fm⁻¹. These data are compared with theory and with other experiments. (authors)

  8. A beam line to inspire

    CERN Multimedia

    2014-01-01

    Just as the sunshine seems to have arrived back at CERN, in other respects summer is coming to a close as we say our farewells to this year's crop of summer students. This injection of young people – always a welcome feature in July and August at CERN – dates back to the early 1960s, when the Summer Student programme began under one of my predecessors, Viki Weisskopf. The idea was to awaken the interest of undergraduates in CERN's activities by offering them the chance of hands-on experience during their long summer vacation. Around the same time, the CERN School of Physics began. Aimed at young postgraduates, it led to the current European School of High-Energy Physics and related schools in Latin America and the Asia-Pacific region. Over the years, it was joined by CERN schools on accelerator subjects and computing, which have expanded CERN's training mandate. These days, our efforts begin with young people before they go to university &...

  9. Particle accelerators from Big Bang physics to hadron therapy

    CERN Document Server

    Amaldi, Ugo

    2015-01-01

    The theoretical physicist Victor “Viki” Weisskopf, Director-General of CERN from 1961 to 1965, once said: “There are three kinds of physicists, namely the machine builders, the experimental physicists, and the theoretical physicists. […] The machine builders are the most important ones, because if they were not there, we would not get into this small-scale region of space. If we compare this with the discovery of America, the machine builders correspond to captains and ship builders who really developed the techniques at that time. The experimentalists were those fellows on the ships who sailed to the other side of the world and then landed on the new islands and wrote down what they saw. The theoretical physicists are those who stayed behind in Madrid and told Columbus that he was going to land in India.” Rather than focusing on the theoretical physicists, as most popular science books on particle physics do, this beautifully written and also entertaining book is different in that, firstly, the main foc...

  10. Dependence of dipole transition gamma ray strength on the type of nucleus

    International Nuclear Information System (INIS)

    Cojocaru, V.; Stefanescu, Irina; Popescu, I.V.; Badica, T.; Olariu, A.

    2000-01-01

    The strength of a gamma-ray transition is defined as the ratio between the experimental radiative width Γγ and the theoretical radiative width calculated according to a model (for example the Weisskopf single-particle model, ΓW). It is important to know on which parameters these strengths depend. In our previous work we demonstrated the dependence of the dipole transition gamma-ray strengths on the type of the nucleus. In this paper we look for a possible dependence of the quadrupole gamma-ray strengths on the type of nucleus (doubly-even, doubly-odd, with odd proton number and odd neutron number). All the input data are taken from the National Nuclear Data Center, Brookhaven. In order to demonstrate this possible dependence one can use the average of the strongest 10% of transitions of a given character. As far as the A dependence is concerned, we use the following A regions: 6-20, 21-44, 45-90, 91-150, 151-200. An average value for these transitions is also plotted for both the E2 and M2 transitions. Generally, all the functions log₁₀S vs. A (S = Γγ/ΓW) have the same pattern as the 'total' one obtained by Endt. Moreover, there is a clear difference, in most A regions, between the average S₁₀ values for different types of nuclei. As far as the RUL (recommended upper limits, in W.u.) are concerned, they have to be established as the highest experimental values of the transition strengths. In this work we suggest new RUL, this time in connection with the type of the nucleus. A table with the RUL depending on the nuclear type, for E2 and M2 transitions respectively, is given. The number of M2 transitions is quite small; in this case, one might set the recommended upper limits with some caution. (authors)

  11. Study of hyperfine anomaly in 9,11Be isotopes

    International Nuclear Information System (INIS)

    Parfenova, Y.; Leclercq-Willain

    2005-01-01

    The study of the hyperfine anomaly of neutron-rich nuclei, in particular neutron halo nuclei, can give a very specific and unique way to measure their neutron distribution and confirm a halo structure. The hyperfine structure anomaly in Be⁺ ions is calculated with a realistic electronic wave function, obtained as a solution of the Dirac equation. In the calculations, the Coulomb potential modified by the charge distribution of the clustered nucleus and three electrons in the configuration 1s²2s is used. The nuclear wave function for the 11Be nucleus is obtained in the core + nucleon model, and that for the 9Be nucleus is calculated in the three-cluster (α + α + n) model. The aim of this study is to test whether the hyperfine structure anomaly reflects the extended spatial structure of 11Be. The results of the calculations are listed: ε_BW is the hyperfine anomaly due to the Bohr-Weisskopf effect, δ is the charge structure correction, μ is the calculated magnetic moment and μ_exp is the experimental value of the magnetic moment, and Q and Q_exp are the calculated and measured values of the quadrupole moment. The results for 9Be are obtained with two different three-body wave functions (WF1 and WF2), showing the sensitivity of the calculations to the input parameters. The value of ε_BW is sensitive to the weights of the states in the nuclear ground-state wave function. The total hyperfine anomaly ε = ε_BW + δ in 11Be differs from that in 9Be by 25%. This gives a measure of the accuracy of the hyperfine anomaly measurements needed to study the neutron distribution in the Be isotopes. (authors)

  12. Dynamics and quantum Zeno effect for a qubit in either a low- or high-frequency bath beyond the rotating-wave approximation

    International Nuclear Information System (INIS)

    Cao Xiufeng; You, J. Q.; Zheng, H.; Kofman, A. G.; Nori, Franco

    2010-01-01

    We use a non-Markovian approach to study the decoherence dynamics of a qubit in either a low- or high-frequency bath modeling the qubit environment. This is done for two separate cases: either with measurements or without them. This approach is based on a unitary transformation and does not require the rotating-wave approximation. In the case without measurement, we show that, for low-frequency noise, the bath shifts the qubit energy toward higher energies (blue shift), while the ordinary high-frequency-cutoff Ohmic bath shifts the qubit energy toward lower energies (red shift). In order to preserve the coherence of the qubit, we also investigate the dynamics of the qubit subject to measurements (quantum Zeno regime) in two cases: low- and high-frequency baths. For very frequent projective measurements, the low-frequency bath gives rise to the quantum anti-Zeno effect on the qubit. The quantum Zeno effect only occurs in the high-frequency-cutoff Ohmic bath, after counterrotating terms are considered. Under the condition that the decay rates due to the two kinds of baths are equal in the Wigner-Weisskopf approximation, we find that, without this approximation, the decay rate for a high-frequency environment is faster (without measurements) or slower (with frequent measurements, in the Zeno regime) than in the low-frequency bath case. The experimental implementation of our results here could distinguish the type of bath (either a low- or high-frequency one) and protect the coherence of the qubit by modulating the dominant frequency of its environment.

  13. IXPE: The Imaging X-ray Polarimetry Explorer, Implementing a Dedicated Polarimetry Mission

    Science.gov (United States)

    Ramsey, Brian

    2014-01-01

    Only a few experiments have conducted X-ray polarimetry of cosmic sources since Weisskopf et al. confirmed the 19% polarization of the Crab Nebula with the Orbiting Solar Observatory (OSO-8) in the 1970s. The challenge is to measure a faint polarized component against a background of non-polarized signal (as well as the other, typical background components); typically, for a minimum detectable polarization of a few percent, 10⁶ photons are required. A dedicated mission is therefore vital, with instruments designed specifically to measure polarization with minimal systematic effects. Over the proposed mission life (2-3 years), IXPE will first survey representative samples of several categories of targets: magnetars, isolated pulsars, pulsar wind nebulae and supernova remnants, microquasars, active galaxies, etc. The survey results will guide detailed follow-up observations. Precise calibration of IXPE is vital to ensuring that sensitivity goals are met: the detectors will be characterized in Italy, and then a full calibration of the complete instrument, using polarized flux at different energies, will be performed at MSFC's stray-light facility. The instrument heritage builds on X-ray optics developed at MSFC for this dedicated polarimetry mission.

  14. Time evolution of the K⁰-K̄⁰ system in spectral formulation

    Energy Technology Data Exchange (ETDEWEB)

    Nowakowski, M. [INFN, Laboratori Nazionali di Frascati, Rome (Italy)

    1996-02-01

    The time evolution of the K⁰-K̄⁰ system is reanalyzed in the language of a certain spectral function whose Fourier transforms give the time-dependent survival and transition amplitudes. Approximating the spectral function by a one-pole ansatz, the paper gives insight into the limits of validity of the one-pole approximation, not only for small and large time scales, but also for intermediate times where new effects, albeit small, are possible. It is shown that the same validity restrictions apply to the known formulae of the Weisskopf-Wigner approximation as well. The present analysis can also be applied to the effect of vacuum regeneration of K_L and K_S, a possibility pointed out by Khalfin. As a result of this possibility, new contributions to the well-known oscillatory terms enter the time-dependent transition probabilities. These new terms are not associated with the small- or large-time behaviour of the amplitudes and therefore their magnitude is a priori unknown. It is shown that the order of magnitude of this new effect is very small and that, in principle, its exact determination lies outside the scope of the one-pole ansatz.

  15. People and things. CERN Courier, Oct 1995, v. 35(7)

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The article reports on achievements of various people, staff changes and position opportunities within the CERN organization and contains news updates on upcoming or past events. Gentner-Kastler Prize: The prestigious Gentner-Kastler Prize, jointly awarded by the French and German Physical Societies, goes this year to Walter Schmidt-Parzefall of DESY, formerly leader of the Argus group at the DORIS electron-positron collider at the DESY Laboratory Hamburg, which has made many significant contributions to heavy quark spectroscopy. Subsequently he joined Hamburg University, and has recently played a prominent role in establishing the HERA-B experiment at DESY's HERA electron-proton collider. Before working at DESY, Schmidt-Parzefall spent some time at CERN's Intersecting Storage Rings. Thirty ISR years: A discreet lunch event at CERN marked the 30th anniversary of the historic decision to go ahead with the Intersecting Storage Rings (ISR) at CERN. Among those present were Victor Weisskopf, CERN's Director General at the time, and Mervyn Hine, responsible for CERN's long-term planning under Weisskopf. The ISR, the world's first proton collider, came into operation in 1971, ahead of schedule, but was shut down in 1984, also ahead of schedule, as part of the bid to divert funds to LEP construction. The ISR, which used the idea of particle stacking to build up the stored beam intensity, was long regarded as a masterpiece of accelerator building, and blazed a trail for CERN's future accelerator projects. Many CERN specialists cut their accelerator teeth at the ISR. ICTP Dirac Medal: The International Centre for Theoretical Physics (ICTP), Trieste, is awarding its 1995 Dirac Medal to Michael Berry of Bristol for his discovery of the non-integrable phase that arises in adiabatic processes in quantum theory. This effect was first detected in 1986 in an optics experiment by Tomita and Chiao in which the rotation of the polarization plane of a

  16. MADNIX a code to calculate prompt fission neutron spectra and average prompt neutron multiplicities

    International Nuclear Information System (INIS)

    Merchant, A.C.

    1986-03-01

    A code has been written and tested on the CDC Cyber-170 to calculate the prompt fission neutron spectrum, N(E), as a function of both the fissioning nucleus and its excitation energy. In this note a brief description of the underlying physical principles involved and a detailed explanation of the required input data (together with a sample output for the fission of 235U induced by 14 MeV neutrons) are presented. Weisskopf's standard nuclear evaporation theory provides the basis for the calculation. Two important refinements are that the distribution of fission-fragment residual nuclear temperature and the cooling of the fragments as neutrons are emitted are approximately taken into account, and that the energy dependence of the cross section for the inverse process of compound-nucleus formation is included. This approach is then used to calculate the average number of prompt neutrons emitted per fission, ν̄_p. At high excitation energies, where fission is still possible after neutron emission, the consequences of the competition between first-, second- and third-chance fission on N(E) and ν̄_p are calculated. Excellent agreement with all the examples given in the original work of Madland and Nix is obtained. (author) [pt
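
    As a rough illustration of the underlying physics (not the MADNIX code itself, which also includes an energy-dependent inverse cross section, multiple-chance fission and the transformation to the laboratory frame), a minimal sketch of a Weisskopf evaporation spectrum averaged over the triangular residual-temperature distribution used by Madland and Nix:

    ```python
    import numpy as np

    def weisskopf_spectrum(eps, T):
        """Centre-of-mass Weisskopf evaporation spectrum for a constant inverse
        (compound-nucleus formation) cross section: phi(eps) = eps/T**2 * exp(-eps/T)."""
        return eps / T**2 * np.exp(-eps / T)

    def madland_nix_cm(eps, Tm, n_T=400):
        """Average the Weisskopf spectrum over the triangular temperature
        distribution P(T) = 2*T/Tm**2 (0 <= T <= Tm)."""
        T = np.linspace(1e-4, Tm, n_T)
        dT = T[1] - T[0]
        P = 2.0 * T / Tm**2
        return (P * weisskopf_spectrum(eps[:, None], T)).sum(axis=1) * dT

    eps = np.linspace(0.01, 15.0, 1500)   # outgoing neutron energy (MeV, centre-of-mass frame)
    spec = madland_nix_cm(eps, Tm=1.0)    # Tm ~ 1 MeV is an illustrative value only
    mean = (eps * spec).sum() / spec.sum()
    print(f"mean c.m. neutron energy ~ {mean:.2f} MeV")   # ~ 4*Tm/3
    ```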

  17. Electric-dipole-moment enhancement factor for the thallium atom, and a new upper limit on the electric dipole moment of the electron

    International Nuclear Information System (INIS)

    Sandars, P.G.H.; Sternheimer, R.M.

    1975-01-01

    Some time ago, an accurate upper limit on a possible permanent electric dipole moment of the thallium atom in the 6²P₁/₂ ground state was obtained by Gould. The result was D_Tl = [(1.3 ± 2.4) × 10⁻²¹ cm]e. In connection with this value, a calculation of the electric-dipole enhancement factor R_Tl was carried out, defined as the ratio D_Tl/D_e, where D_e is the corresponding upper limit on a possible electric dipole moment of the (valence) electron. A value R_Tl = 700 was obtained, which leads to an upper limit D_e = [(1.9 ± 3.4) × 10⁻²⁴ cm]e. This result is comparable with the value of D_e (of the order of 10⁻²⁴ cm e) previously obtained by Weisskopf et al. from measurements on the cesium atom, and with the result of Player and Sandars of [(0.7 ± 2.2) × 10⁻²⁴ cm]e obtained from the search for an electric dipole moment in the ³P₂ metastable state of xenon. All three results set a stringent upper limit on the amount of a possible violation of T and P invariance in electromagnetic interactions. (U.S.)
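
    The enhancement factor simply rescales the atomic limit into an electron limit; as a worked check using only the numbers quoted in this record:

    ```latex
    D_e \;=\; \frac{D_{\mathrm{Tl}}}{R_{\mathrm{Tl}}}
        \;=\; \frac{(1.3 \pm 2.4)\times 10^{-21}\,e\,\mathrm{cm}}{700}
        \;\approx\; (1.9 \pm 3.4)\times 10^{-24}\,e\,\mathrm{cm}.
    ```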

  18. Neutron and proton transmutation-activation cross section libraries to 150 MeV for application in accelerator-driven systems and radioactive ion beam target-design studies

    International Nuclear Information System (INIS)

    Koning, A.J.; Chadwick, M.B.; MacFarlane, R.E.; Mashnik, S.; Wilson, W.B.

    1998-05-01

    New transmutation-activation nuclear data libraries for neutrons and protons up to 150 MeV have been created. These data are important for simulation calculations of radioactivity and transmutation in accelerator-driven systems such as the production of tritium (APT) and the transmutation of waste (ATW). They can also be used to obtain cross-section predictions for the production of proton-rich isotopes in (p,xn) reactions, for radioactive ion beam (RIB) target-design studies. The nuclear data in these libraries stem from two sources: for neutrons below 20 MeV, we use data from the European activation and transmutation file, EAF97; for neutrons above 20 MeV and for protons at all energies we have calculated isotope production cross sections with the nuclear model code HMS-ALICE. This code applies the Monte Carlo Hybrid Simulation theory and the Weisskopf-Ewing theory to calculate cross sections. In a few cases, the HMS-ALICE results were replaced by those calculated using the GNASH code for the Los Alamos LA150 transport library. The resulting two libraries, AF150.N and AF150.P, consist of 766 nuclides each and are represented in the ENDF6 format. An outline is given of the new representation of the data. The libraries have been checked with ENDF6 preprocessing tools and have been processed with NJOY into libraries for the Los Alamos transmutation/radioactivity code CINDER. Numerous benchmark figures are presented for proton-induced excitation functions of various isotopes compared with measurements. Such comparisons are useful for validation purposes and for assessing the accuracy of the evaluated data. These evaluated libraries are available on the WWW at: http://t2.lanl.gov/. 21 refs

  19. Measurements of vacuum magnetic birefringence using permanent dipole magnets: the PVLAS experiment

    Science.gov (United States)

    Della Valle, F.; Gastaldi, U.; Messineo, G.; Milotti, E.; Pengo, R.; Piemontese, L.; Ruoso, G.; Zavattini, G.

    2013-05-01

    The PVLAS collaboration is presently assembling a new apparatus (at the INFN section of Ferrara, Italy) to detect vacuum magnetic birefringence (VMB). VMB is related to the structure of the quantum electrodynamics (QED) vacuum and is predicted by the Euler-Heisenberg-Weisskopf effective Lagrangian. It can be detected by measuring the ellipticity acquired by a linearly polarized light beam propagating through a strong magnetic field. Using the very same optical technique it is also possible to search for hypothetical low-mass particles interacting with two photons, such as axion-like (ALP) or millicharged particles. Here we report the results of a scaled-down test setup and describe the new PVLAS apparatus. This latter is in construction and is based on a high-sensitivity ellipsometer with a high-finesse Fabry-Perot cavity (finesse > 4 × 10⁵) and two 0.8 m long, 2.5 T rotating permanent dipole magnets. Measurements with the test setup have improved, by a factor of 2, the previous upper bound on the parameter A_e, which determines the strength of the nonlinear terms in the QED Lagrangian: A_e(PVLAS) < 3.3 × 10⁻²¹ T⁻² at 95% c.l. Furthermore, new laboratory limits have been put on the inverse coupling constant of ALPs to two photons, and confirmation of previous limits on the fractional charge of millicharged particles is given.
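
    For a sense of scale (not from the record itself; the numbers below are assumptions combining the apparatus parameters quoted above with the leading-order QED value A_e ≈ 1.32 × 10⁻²⁴ T⁻²), the induced birefringence Δn = 3·A_e·B² and the cavity-amplified ellipticity can be estimated as follows:

    ```python
    # Order-of-magnitude estimate of the QED vacuum-birefringence signal in a
    # PVLAS-like ellipsometer. All numbers are illustrative assumptions.
    A_e   = 1.32e-24      # leading-order QED nonlinearity parameter, T^-2
    B     = 2.5           # magnetic field, T
    L     = 2 * 0.8       # total magnetic path length (two 0.8 m dipoles), m
    wavel = 1064e-9       # assumed laser wavelength, m
    F     = 4e5           # Fabry-Perot finesse

    delta_n = 3 * A_e * B**2                 # field-induced birefringence
    psi     = 2 * F * L * delta_n / wavel    # ellipticity for polarization at 45 deg to B
    print(f"delta_n ~ {delta_n:.1e},  ellipticity ~ {psi:.1e}")   # ~2.5e-23 and ~3e-11
    ```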

  20. Collisions involving energy transfer between atoms with large angular momenta

    International Nuclear Information System (INIS)

    Vdovin, Yu.A.; Galitskij, V.M.

    1975-01-01

    A study is made of collisions between excited and unexcited atoms with a small resonance defect, assuming that the excited and ground states of each atom are connected by an allowed dipole transition and that the intrinsic angular momenta of the states are large. In this approximation the atomic interaction is described by a dipole-dipole interaction operator. Equations for the amplitudes are derived for two cases: (1) the first atom is in the excited state while the second is in the ground state, and (2) the first atom is in the ground state while the second is in the excited state. The problem is solved in the approximation that the angular momenta of the excited and ground states of each atom are equal. An expression for the excitation transfer cross section is written down. Analysis of this expression shows that the excitation transfer cross section at first increases as the system is detuned from exact resonance, reaching a maximum at λ ≈ 0.1 (λ is a dimensionless parameter equal to the ratio of the resonance defect Δ to the interaction at separations of the order of the Weisskopf radius). Only at λ > 0.16 does the cross section become smaller than the resonance value. This effect is due to the interaction Hamiltonian approximation adopted in the present study

  1. Investigation of the energy levels of 38Ar

    International Nuclear Information System (INIS)

    Waanders, F.B.

    1975-07-01

    In this project information on the energy levels of 38Ar was obtained by means of the (p,γ) reaction. The 1.1 MeV Cockcroft-Walton accelerator of the Potchefstroom University for CHE was used to produce the proton beam, while an 80 cm³ Ge(Li) detector was used to detect the gamma rays. Precise gamma-ray branchings were determined for 50 bound levels, of which four had not previously been determined. These branchings were obtained from the 28 resonances studied in the 37Cl(p,γ)38Ar reaction. The resonance with a proton energy of (592 ± 3) keV had not been detected previously. The resonance energies, Q-value and energies of the bound levels used in this project were taken from the study by Alderliesten. The mean lifetimes of a few bound levels of 38Ar were measured by means of the Doppler shift attenuation method. The results concerning the bound states and mean lifetimes are in good agreement with previous experiments. Limits on the spins and parities of 19 (p,γ) resonances have been set by means of Weisskopf estimates. Only those cases for which the spin could be limited to two values are discussed in the text. A summary of the experimental data obtained on 38Ar is compared with the results of shell-model calculations done by various workers. A short discussion of the analogue states in 38Ar is also given [af

  2. Model-model Perencanaan Strategik [Models of Strategic Planning]

    OpenAIRE

    Amirin, Tatang M

    2005-01-01

    The process of strategic planning, formerly called long-term planning, consists of several components, including strategic analysis, setting the strategic direction (covering mission, vision, and values), and action planning. Many writers develop models representing the steps of the strategic planning process, i.e. the basic planning model, the problem-based planning model, the scenario model, and the organic or self-organizing model.

  3. Model-to-model interface for multiscale materials modeling

    Energy Technology Data Exchange (ETDEWEB)

    Antonelli, Perry Edward [Iowa State Univ., Ames, IA (United States)

    2017-12-17

    A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.

  4. Relaxation to quantum-statistical equilibrium of the Wigner-Weisskopf atom in a one-dimensional radiation field. VIII. Emission in an infinite system in the presence of an extra photon

    International Nuclear Information System (INIS)

    Davidson, R.; Kozak, J.J.

    1978-01-01

    In this paper we study the emission of a two-level atom in a radiation field in the case where one mode of the field is assumed to be excited initially, and where the system is assumed to be of infinite extent. (The restriction to a one-dimensional field, which has been made throughout this series, is not essential: it is made chiefly for ease of presentation of the mathematical methods.) An exact expression is obtained for the probability ρ(t) that the two-level quantum system is in the excited state at time t. This problem, previously unsolved in radiation theory, is tackled by reformulating the expression found in VII [J. Math. Phys. 16, 1013 (1975)] of this series for the time evolution of ρ(t) in a finite system in the presence of an extra photon, and then constructing the infinite-system limit. A quantitative assessment of the role of the extra photon and of the coupling constant in influencing the dynamics is obtained by studying numerically the expression derived for ρ(t) for a particular choice of initial condition. The study presented here casts light on the problem of time-reversal invariance and clarifies the sense in which exponential decay is universal; in particular, we find that: (1) it is the infinite-system limit which converts the time-reversible solutions of VII into the irreversible solution obtained here, and (2) it is the weak-coupling limit that imposes exponential form on the time dependence of the evolution of the system. The anticipated generalization of our methods to more complicated radiation-matter problems is discussed, and finally, several problems in radiation chemistry and physics, already accessible to exact analysis given the approach introduced here, are cited
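
    The role of the two limits can be made concrete with a minimal single-excitation sketch (not part of the original abstract; it uses an illustrative flat-coupling discretized field rather than the one-dimensional model of the paper): with many modes and weak coupling, the excited-state population follows the golden-rule exponential exp(-Γt), Γ = 2πg²ρ, until the finite mode spacing eventually produces recurrences.

    ```python
    import numpy as np

    # Two-level emitter coupled to N discretized field modes (single-excitation sector).
    N, band, g = 400, 10.0, 0.05              # number of modes, bandwidth, coupling (arbitrary units)
    wk = np.linspace(-band / 2, band / 2, N)  # mode frequencies, band centred on the emitter frequency (0)
    gamma = 2 * np.pi * g**2 * (N / band)     # golden-rule (Wigner-Weisskopf) decay rate

    H = np.zeros((N + 1, N + 1))
    H[1:, 1:] = np.diag(wk)
    H[0, 1:] = H[1:, 0] = g                   # flat emitter-mode coupling; index 0 is the excited emitter

    evals, V = np.linalg.eigh(H)
    for t in (1.0, 5.0, 10.0):
        c_e = (V[0, :] * np.exp(-1j * evals * t)) @ V[0, :]   # amplitude to remain excited
        print(f"t={t:>4}: |c_e|^2 = {abs(c_e)**2:.3f}   exp(-Gamma*t) = {np.exp(-gamma * t):.3f}")
    ```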

  5. Models and role models.

    Science.gov (United States)

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that fluoride is perhaps not sufficiently potent to reduce dental caries in present-day society prompted us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries. Products and novel agents that interfere with either of these processes could also be evaluated. With these separate models in place, we suggest designing computer models to encompass the available information. Models, but also role models, are of the utmost importance in driving and guiding research and researchers. 2015 S. Karger AG, Basel

  6. Hydrological models are mediating models

    Science.gov (United States)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that, irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models play the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction would deserve closer attention, as they are rarely explicitly presented in the peer-reviewed literature. We believe that devoting

  7. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way in which the brain deals with images and speech. In the field of NLP, topic models are one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, which is unsupervised and based on the language-processing mechanisms of neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. The Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction is the process of representing a document as a low-dimensional vector by a linear model that is completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.

  8. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  9. b-baryon light-cone distribution amplitudes and a dynamical theory for [bq][anti-b anti-q]-tetraquarks

    International Nuclear Information System (INIS)

    Hambrock, Christian

    2011-04-01

    In my thesis I present our work on the bottom-baryon light-cone distribution amplitudes (LCDAs) and on the [bq][anti-b anti-q] tetraquarks. For the former we extended the known LCDAs for the ground-state baryon Λ_b to the entire b-baryon ground-state multiplets and included s-quark mass-breaking effects. The LCDAs form crucial input for the calculation of characteristic properties of b-baryon decays. In this context they can, for example, be used in the calculation of form factors for semileptonic flavor-changing neutral-current (FCNC) decays. For the [bq][anti-b anti-q] tetraquarks, we calculated the tetraquark mass spectrum for all quarks q = u, d, s, c in a constituent Hamiltonian quark model. We estimated the electronic width by introducing a generalized Van Royen-Weisskopf formula for the tetraquarks, and evaluated the partial hadronic two-body and total decay widths for the tetraquarks with quantum numbers J^PC = 1⁻⁻. With this input, we performed a Breit-Wigner fit, including the tetraquark contributions, to the inclusive R_b spectrum measured by BaBar. The obtained χ²/d.o.f. of the fit to the BaBar R_b-scan data is fairly good. The resulting fits are suggestive of tetraquark states but not conclusive. We developed a model to describe the transitions e⁺e⁻ → Y_b → Υ(nS)(π⁺π⁻, K⁺K⁻, ηπ⁰), in which Y_b is a 1⁻⁻ tetraquark state. The model includes the exchange of light tetraquark and meson states. We used this model to fit the invariant-mass and helicity spectra for the dipionic final state measured by Belle, and used the results to estimate the spectra of the channels e⁺e⁻ → Y_b → Υ(nS)(K⁺K⁻, ηπ⁰). The spectra are enigmatic in shape and magnitude and defy an interpretation in the framework of standard bottomonia, calling either for an interpretation in terms of exotic states, such as tetraquarks, or for a radical alteration of the otherwise successful QCD-based bottomonium model. The tetraquark hypothesis describes the current data well
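
    For reference (not taken from the thesis itself), the standard Van Royen-Weisskopf relation that is generalized here connects the leptonic width of a vector quarkonium state to its wave function at the origin; at leading order, without QCD corrections,

    ```latex
    \Gamma(V \to e^+ e^-) \;=\; \frac{16\pi\,\alpha^2\, e_Q^2\,|\psi(0)|^2}{M_V^2},
    ```

    where e_Q is the quark charge in units of e, ψ(0) is the wave function at the origin and M_V is the mass of the vector state.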

  10. Gyromagnetic ratios of the 4+ and 6+ rotational states of 184W

    International Nuclear Information System (INIS)

    Alzner, A.; Bodenstedt, E.; Herrmann, C.; Herzog, P.; Muenning, H.; Reif, H.; Vianden, R.; Wrede, U.

    1982-01-01

    The nuclear Larmor precession has been observed for the 2⁺, 4⁺ and 6⁺ rotational states of 184W in the hyperfine field of W in Fe by application of the TDPAC and IPAC techniques. A carrier-free radioactive source of 184mRe alloyed with high-purity iron was used for all three measurements. From the Larmor precession ω_L = 944(15) MHz observed in the 2⁺ state by TDPAC and the known g-factor, the hyperfine field of W in Fe at 300 K, B_hf = 69.6(27) T, was derived. The deviation from the result of a spin-echo experiment with 183W in Fe, extrapolated to room temperature, may be caused by the Bohr-Weisskopf effect (hyperfine anomaly). IPAC measurements with the same sample, polarized in an external magnetic field of 1.6 T, gave for the 4⁺ and 6⁺ rotational states ω_L·τ(4⁺) = 0.0609(22) and ω_L·τ(6⁺) = 0.00707(98). By use of experimental B(E2) values the g_R factors were derived as g_R(4⁺) = +0.276(26) and g_R(6⁺) = +0.270(43). The directional correlation of the 537-384 keV γ-γ cascade has been analysed in terms of an E1/M2/E3 mixture for the K-forbidden 573 keV transition. We obtained the mixing ratios δ(M2/E1) = +0.086(16) and δ(E3/E1) = -0.028(5) with the sign convention of Krane and Steffen. (orig.)

  11. Vector models and generalized SYK models

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Cheng [Department of Physics, Brown University,Providence RI 02912 (United States)

    2017-05-23

    We consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. A chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  12. [Bone remodeling and modeling/mini-modeling].

    Science.gov (United States)

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone at the fetal and developmental stages, while bone remodeling - the replacement of old bone by new bone - is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e. resting flattened osteoblasts covering bone surfaces, become the active form of osteoblasts and then deposit new bone onto the old bone without intervening osteoclastic bone resorption. Among the drugs for osteoporosis treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts are formed over the newly formed bone and, accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, will be introduced.

  13. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    International Nuclear Information System (INIS)

    Clinton Lum

    2002-01-01

    The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, "3-D Rock Properties Modeling for FY 1998" (SNL 1997); WA-0358, "3-D Rock Properties Modeling for FY 1999" (SNL 1999); and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, "Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01" (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4

  14. Automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

    Science.gov (United States)

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of applications. Since the usefulness of a model for a specific application is determined by its accuracy, model quality estimation is an essential component of protein structure prediction. Comparative protein modeling has become a routine approach in many areas of life science research since fully automated modeling systems also allow nonexperts to build reliable models. In this chapter, we describe practical approaches for automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

  15. Geologic Framework Model Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  16. Geologic Framework Model Analysis Model Report

    International Nuclear Information System (INIS)

    Clayton, R.

    2000-01-01

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and

  17. Model(ing) Law

    DEFF Research Database (Denmark)

    Carlson, Kerstin

    The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...

  18. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.

  19. Translating building information modeling to building energy modeling using model view definition.

    Science.gov (United States)

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  20. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Directory of Open Access Journals (Sweden)

    WoonSeong Jeong

    2014-01-01

    Full Text Available This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  1. Models Archive and ModelWeb at NSSDC

    Science.gov (United States)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF) and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over recent years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded for ModelWeb and 7,092 for the models archive.

  2. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models.   Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known &...

  3. Model Manipulation for End-User Modelers

    DEFF Research Database (Denmark)

    Acretoaie, Vlad

    End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support... and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality. To simplify model editor...

  4. Modeling energy-economy interactions using integrated models

    International Nuclear Information System (INIS)

    Uyterlinde, M.A.

    1994-06-01

    Integrated models are defined as economic energy models that consist of several submodels, either coupled by an interface module, or embedded in one large model. These models can be used for energy policy analysis. Using integrated models yields the following benefits. They provide a framework in which energy-economy interactions can be better analyzed than in stand-alone models. Integrated models can represent both energy sector technological details, as well as the behaviour of the market and the role of prices. Furthermore, the combination of modeling methodologies in one model can compensate weaknesses of one approach with strengths of another. These advantages motivated this survey of the class of integrated models. The purpose of this literature survey therefore was to collect and to present information on integrated models. To carry out this task, several goals were identified. The first goal was to give an overview of what is reported on these models in general. The second one was to find and describe examples of such models. Other goals were to find out what kinds of models were used as component models, and to examine the linkage methodology. Solution methods and their convergence properties were also a subject of interest. The report has the following structure. In chapter 2, a 'conceptual framework' is given. In chapter 3 a number of integrated models is described. In a table, a complete overview is presented of all described models. Finally, in chapter 4, the report is summarized, and conclusions are drawn regarding the advantages and drawbacks of integrated models. 8 figs., 29 refs

  5. On the role of model structure in hydrological modeling : Understanding models

    NARCIS (Netherlands)

    Gharari, S.

    2016-01-01

    Modeling is an essential part of the science of hydrology. Models enable us to formulate what we know and perceive from the real world into a neat package. Rainfall-runoff models are abstract simplifications of how a catchment works. Within the research field of scientific rainfall-runoff modeling,

  6. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    Science.gov (United States)

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.

  7. Model documentation report: Transportation sector model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  8. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.

  9. Modeling styles in business process modeling

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Zugal, S.; Weber, B.; Weidlich, M.; Fahland, D.; Reijers, H.A.; Mendling, J.; Bider, I.; Halpin, T.; Krogstie, J.; Nurcan, S.; Proper, E.; Schmidt, R.; Soffer, P.; Wrycza, S.

    2012-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models. As a consequence, the question arises whether different ways of creating process models exist. In this vein, we observed 115 students engaged in the act of modeling, recording

  10. Comparison: Binomial model and Black Scholes model

    Directory of Open Access Journals (Sweden)

    Amir Ahmad Dar

    2018-03-01

    Full Text Available The Binomial Model and the Black Scholes Model are popular methods used to solve option pricing problems. The Binomial Model is a simple statistical method, whereas the Black Scholes Model requires the solution of a stochastic differential equation. Pricing European call and put options is a demanding task for actuaries. The main goal of this study is to compare the Binomial Model and the Black Scholes Model using two statistical tests, the t-test and Tukey's test, at one period. The results show that there is no significant difference between the means of the European option prices obtained with the two models.
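
    As a rough, hedged illustration of the comparison described above (not the authors' code), the sketch below prices a European call with both a Cox-Ross-Rubinstein binomial tree and the Black-Scholes formula; the input values are arbitrary examples.

```python
# Sketch: compare a CRR binomial-tree price with the Black-Scholes price
# for a European call. Parameter values are illustrative assumptions only.
from math import exp, sqrt, log, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def binomial_call(S, K, T, r, sigma, steps=200):
    dt = T / steps
    u = exp(sigma * sqrt(dt))            # up factor
    d = 1.0 / u                          # down factor
    p = (exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = exp(-r * dt)
    # option payoffs at maturity
    values = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # backward induction through the tree
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

if __name__ == "__main__":
    S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2   # assumed inputs
    print("Black-Scholes:", round(black_scholes_call(S, K, T, r, sigma), 4))
    print("Binomial tree:", round(binomial_call(S, K, T, r, sigma), 4))
```

    With a few hundred steps the two prices agree to a few decimal places, which is the kind of agreement the statistical tests mentioned in the abstract are designed to assess.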

  11. Modelling in Business Model design

    NARCIS (Netherlands)

    Simonse, W.L.

    2013-01-01

    It appears that business model design might not always produce a design or model as the expected result. However when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designers challenge is to combine strategy and

  12. Modelling SDL, Modelling Languages

    Directory of Open Access Journals (Sweden)

    Michael Piefel

    2007-02-01

    Full Text Available Today's software systems are too complex to implement and model using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Handling an increasing number of related or integrated languages has thus become the most challenging task in the development of tools. We use object-oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes) from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.
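
    The idea of abstract, reusable concept classes shared by several modelling languages can be pictured with a toy sketch; this is only an illustrative Python analogy, not the paper's MOF/OCL metamodels, and all class names below are invented.

```python
# Toy illustration: a shared abstract "concept class" capturing behaviour common
# to two modelling languages, with SDL- and UML-specific classes derived from it.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class StateConcept:
    """Abstract concept: a named state, shared by several languages."""
    name: str

@dataclass
class StateMachineConcept:
    """Abstract concept: behaviour as states plus triggered transitions."""
    states: List[StateConcept] = field(default_factory=list)
    transitions: List[Tuple[str, str, str]] = field(default_factory=list)  # (src, dst, trigger)

    def add_transition(self, src: str, dst: str, trigger: str) -> None:
        self.transitions.append((src, dst, trigger))

@dataclass
class SdlProcess(StateMachineConcept):
    """SDL-specific metamodel element reusing the shared concept class."""
    signals: List[str] = field(default_factory=list)

@dataclass
class UmlStateMachine(StateMachineConcept):
    """UML-specific metamodel element reusing the same shared concept class."""
    owning_class: str = ""

proc = SdlProcess(states=[StateConcept("idle"), StateConcept("busy")], signals=["REQ"])
proc.add_transition("idle", "busy", "REQ")
print(proc)
```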

  13. Model integration and a theory of models

    OpenAIRE

    Dolk, Daniel R.; Kottemann, Jeffrey E.

    1993-01-01

    Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: Organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integr...

  14. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  15. Concept Modeling vs. Data modeling in Practice

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2015-01-01

    This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models. We also show how to map from the various elements in the terminological ontology to elements in the data models, and explain the differences between the models. Finally the usefulness of terminological ontologies as a prerequisite for IT development and data modeling is illustrated with examples from...

  16. Automated Protein Structure Modeling with SWISS-MODEL Workspace and the Protein Model Portal

    OpenAIRE

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of appl...

  17. Model documentation report: Transportation sector model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    Over the past year, several modifications have been made to the NEMS Transportation Model, incorporating greater levels of detail and analysis in modules previously represented in the aggregate or under a profusion of simplifying assumptions. This document is intended to amend those sections of the Model Documentation Report (MDR) which describe these superseded modules. Significant changes have been implemented in the LDV Fuel Economy Model, the Alternative Fuel Vehicle Model, the LDV Fleet Module, and the Highway Freight Model. The relevant sections of the MDR have been extracted from the original document, amended, and are presented in the following pages. A brief summary of the modifications follows: In the Fuel Economy Model, modifications have been made which permit the user to employ more optimistic assumptions about the commercial viability and impact of selected technological improvements. This model also explicitly calculates the fuel economy of an array of alternative fuel vehicles (AFVs) which are subsequently used in the estimation of vehicle sales. In the Alternative Fuel Vehicle Model, the results of the Fuel Economy Model have been incorporated, and the program flows have been modified to reflect that fact. In the Light Duty Vehicle Fleet Module, the sales of vehicles to fleets of various sizes are endogenously calculated in order to provide a more detailed estimate of the impacts of EPACT legislation on the sales of AFVs to fleets. In the Highway Freight Model, the previous aggregate estimation has been replaced by a detailed Freight Truck Stock Model, where travel patterns, efficiencies, and energy intensities are estimated by industrial grouping. Several appendices are provided at the end of this document, containing data tables and supplementary descriptions of the model development process which are not integral to an understanding of the overall model structure.

  18. The IMACLIM model; Le modele IMACLIM

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    This document provides annexes to the IMACLIM model, giving an updated description of IMACLIM, a model designed as a tool for evaluating greenhouse gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)

  19. Modelling Practice

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...

  20. Leadership Models.

    Science.gov (United States)

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  1. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  2. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity

  3. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
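
    A minimal, non-hierarchical sketch of the Bayesian model averaging quantities named above (posterior model probabilities, within-model, between-model and total variance) is given below; all numbers are invented, and the full HBMA method additionally arranges this decomposition in a tree over the uncertain model components.

```python
# Minimal sketch of BMA quantities for a single prediction: posterior-probability-
# weighted mean, within-model variance, between-model variance and total variance.
# A simplified, non-hierarchical illustration with made-up numbers.
import numpy as np

# posterior model probabilities of three candidate models (assumed values)
p = np.array([0.5, 0.3, 0.2])
# each model's predicted mean and predictive variance (assumed values)
mean = np.array([12.0, 15.0, 9.0])
var = np.array([1.0, 2.5, 0.8])

bma_mean = np.sum(p * mean)                        # model-averaged prediction
within_var = np.sum(p * var)                       # within-model variance
between_var = np.sum(p * (mean - bma_mean) ** 2)   # between-model variance
total_var = within_var + between_var               # total predictive variance

print(f"BMA mean: {bma_mean:.2f}")
print(f"within-model variance:  {within_var:.2f}")
print(f"between-model variance: {between_var:.2f}")
print(f"total variance:         {total_var:.2f}")
```

    In the hierarchical version, this decomposition is repeated at each level of the BMA tree (e.g. geological architecture, formation dip, boundary conditions), so each uncertain component's contribution to total uncertainty can be prioritised.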

  4. Models and role models

    NARCIS (Netherlands)

    ten Cate, J.M.

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of

  5. Quenching of Einstein A-Coefficients in plasmas and lasers

    International Nuclear Information System (INIS)

    Suckewer, S.; Princeton Univ., NJ

    1991-03-01

    The coefficient of spontaneous emission (Einstein A-coefficient) is considered to be one of the basic constants of a given transition in an atom or ion. The formula for the Einstein A-coefficient was derived in the pioneering works of Weisskopf and Wigner (WW) based on Dirac's theory of light. More recently, however, it was noted in several papers that the rate of spontaneous radiative decay can deviate significantly from the WW expression under certain conditions, for example in a laser cavity. A different type of change in A-coefficients was inferred from measurements of changes in the intensity branching ratio of spectral lines in a plasma. A change of branching ratio of up to a factor of 10 was observed in CIV for the 3p-3s (580.1-581.2 nm) and 3p-2s (31.2 nm) transitions when the electron density changed from approximately N_e ∼ 1 x 10^18 to 5 x 10^18 cm^-3. This effect was also observed in CIII and NV. An initial theoretical approach to the problem, based on the integration of the Schroedinger equation with the ion Coulomb potential modified by the electron cloud within the Debye radius, was unsuccessful in predicting the experimental observations. The effect of quenching of spontaneous emission coefficients was also observed in an Ar-ion laser as a function of the intracavity power density (photon density) for lines originating from the same upper level as the lasing line. Measurements of these line profiles and absorption for different lasing conditions, and related discussions, are also presented. 14 refs., 6 figs
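
    For reference, the Weisskopf-Wigner expression referred to above can be written, for an electric-dipole transition of angular frequency ω21 and dipole matrix element d21 (SI units), as:

```latex
% Spontaneous-emission (Einstein A) coefficient for an electric-dipole transition,
% as obtained in the Weisskopf-Wigner treatment (SI units):
A_{21} \;=\; \frac{\omega_{21}^{3}\,\lvert \mathbf{d}_{21} \rvert^{2}}{3\pi\varepsilon_{0}\hbar c^{3}},
\qquad
\mathbf{d}_{21} \;=\; \langle 1 \rvert\, e\,\hat{\mathbf{r}} \,\lvert 2 \rangle .
```

    The quenching discussed in the abstract corresponds to measured departures from this density-independent value at high electron density or high intracavity photon density.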

  6. History of CERN. V. 2

    International Nuclear Information System (INIS)

    Hermann, A.; Krige, J.; Mersits, U.; Pestre, D.; Weiss, L.

    1990-01-01

    This volume of the History of CERN starts at 8 October 1954, when the Council of the new organization met for the first time, and takes the history through the mid-1960's. when it was decided to equip the laboratory with a second generation of accelerators and a new Director-General was nominated. It covers the building and the running of the laboratory during these dozen years, it studies the construction and exploitation of the 600 MeV Synchro-cyclotron and the 28 GeV Proton Synchrotron, it considers the setting up of the material and organizational infrastructure which made this possible, and it covers the reigns of four Director-Generals, Felix Bloch, Cornelis Bakker, John Adams and Victor Weisskopf. Part I describes the various aspects which together constitute the history of CERN and aims to offer a synchronic panorama year by year account of CERN's main activities. Part II deals primarily with technological achievements and scientific results and it includes the most technical chapters in the volume. Part III defines how the CERN 'system' functioned, how this science-based organization worked, how it chose, planned and concretely realized its experimental programme on the shop-floor and how it identified the equipment it would need in the long term and organized its relations with the outside world, notably the political world. The concluding Part IV brings out the specificity of CERN, to identify the ways in which it differed from other big science laboratories in the 1950's and 1960's, and to try to understand where its uniqueness and originality lay. (author). refs.; figs.; tabs

  7. On small things in water moving around: Purcell's contributions to biology

    Science.gov (United States)

    Berg, Howard

    2012-02-01

    I went to see Purcell after finishing my course work for the Ph.D. (1961) to ask whether I might join his group. "But I don't have any graduate students," he said. "Why is that?" I asked. "I can't think of anything to do," he replied. That was a wipe out. After I had finished my Ph.D. with Ramsey on the hydrogen maser (1964), Ed and I came up with an idea that led to work on sedimentation field-flow fractionation (PNAS 1967). We had hoped this method would be useful for biology, but problems of adsorption of proteins to surfaces stood in the way. Then I moved over to the biology department and got interested in the motile behavior of bacteria (1968). Here was a subject that I thought Ed would really enjoy. There were wonderful movies made by Norbert Pfennig of experiments done by Theodor Engelmann in the 1880's. We found a 16-mm projector and looked at these movies on Ed's office wall. Ed's first comment proved seminal, "How can such a small cell swim in a straight line?" We thought about how cells count molecules in their environment and wrote "Physics of chemoreception" (Biophys. J., 1977). In the meantime, Ed gave a memorable lecture at Viki Weisskopf's retirement symposium, his classic "Life at low Reynolds number" (Am. J. Phys. 1977). Ed really wanted to understand what it would be like to swim like a bacterium! He wasn't very interested in what the literature had to say about such a problem, he wanted to think it through for himself. My role was straight man. I very much enjoyed the ride.

  8. Multiscale musculoskeletal modelling, data–model fusion and electromyography-informed modelling

    Science.gov (United States)

    Zhang, J.; Heidlauf, T.; Sartori, M.; Besier, T.; Röhrle, O.; Lloyd, D.

    2016-01-01

    This paper proposes methods and technologies that advance the state of the art for modelling the musculoskeletal system across the spatial and temporal scales; and storing these using efficient ontologies and tools. We present population-based modelling as an efficient method to rapidly generate individual morphology from only a few measurements and to learn from the ever-increasing supply of imaging data available. We present multiscale methods for continuum muscle and bone models; and efficient mechanostatistical methods, both continuum and particle-based, to bridge the scales. Finally, we examine both the importance that muscles play in bone remodelling stimuli and the latest muscle force prediction methods that use electromyography-assisted modelling techniques to compute musculoskeletal forces that best reflect the underlying neuromuscular activity. Our proposal is that, in order to have a clinically relevant virtual physiological human, (i) bone and muscle mechanics must be considered together; (ii) models should be trained on population data to permit rapid generation and use underlying principal modes that describe both muscle patterns and morphology; and (iii) these tools need to be available in an open-source repository so that the scientific community may use, personalize and contribute to the database of models. PMID:27051510

  9. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratoy Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  10. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
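
    The division of labour described above (cheap proxy for derivatives, expensive model only for evaluating candidate upgrades) can be sketched schematically as follows; this is not the PEST implementation, and both model functions below are invented placeholders.

```python
# Schematic sketch of proxy-assisted gradient-based calibration: the cheap proxy
# supplies finite-difference derivatives (the Jacobian), while the expensive model
# is run only to test parameter upgrades. All functions and data are hypothetical.
import numpy as np

obs = np.array([4.0, 1.0, 4.0])                      # observations (made up)

def complex_model(p):
    # stand-in for the expensive, numerically awkward model
    return np.array([p[0] + p[1], 0.5 * p[0], p[1] ** 2])

def proxy_model(p):
    # stand-in for the cheap analytical surrogate (deliberately imperfect)
    return np.array([p[0] + p[1], 0.55 * p[0], p[1] ** 2])

def jacobian_from_proxy(p, eps=1e-6):
    """Finite-difference Jacobian using only cheap proxy runs."""
    base = proxy_model(p)
    J = np.zeros((base.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (proxy_model(dp) - base) / eps
    return J

def sse(p):
    """Sum of squared errors, evaluated with the expensive model."""
    return float(np.sum((obs - complex_model(p)) ** 2))

p = np.array([1.0, 1.0])
for _ in range(20):
    residual = obs - complex_model(p)                     # expensive run
    J = jacobian_from_proxy(p)                            # cheap runs only
    step, *_ = np.linalg.lstsq(J, residual, rcond=None)   # Gauss-Newton step
    if sse(p + step) < sse(p):                            # test upgrade on the real model
        p = p + step
    else:
        break                                             # stop when no further improvement
print("calibrated parameters:", p)
```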

  11. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
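
    A minimal sketch of the LIF model described above is given below; the parameter values are illustrative only and are not taken from the tutorial.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: integrate the membrane equation
# tau_m * dV/dt = -(V - V_rest) + R*I, emit a spike and reset when V crosses
# threshold. Parameter values are illustrative assumptions.
dt = 0.1          # time step (ms)
T = 100.0         # total simulated time (ms)
tau_m = 10.0      # membrane time constant (ms)
V_rest = -65.0    # resting potential (mV)
V_reset = -70.0   # reset potential after a spike (mV)
V_th = -50.0      # spike threshold (mV)
R = 10.0          # membrane resistance (MOhm)
I = 2.0           # constant input current (nA)

V = V_rest
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    dV = (-(V - V_rest) + R * I) / tau_m
    V += dt * dV
    if V >= V_th:              # threshold crossing -> spike
        spike_times.append(t)
        V = V_reset            # reset (no explicit refractory period here)

print(f"{len(spike_times)} spikes, first few at (ms): {spike_times[:5]}")
```

    Izhikevich's model replaces the single membrane equation with two coupled equations (membrane potential plus a recovery variable) and four parameters, which is what lets it reproduce a wider range of realistic spike patterns.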

  12. Population balance models: a useful complementary modelling framework for future WWTP modelling

    DEFF Research Database (Denmark)

    Nopens, Ingmar; Torfs, Elena; Ducoste, Joel

    2015-01-01

    Population balance models (PBMs) represent a powerful modelling framework for the description of the dynamics of properties that are characterised by distributions. This distribution of properties under transient conditions has been demonstrated in many chemical engineering applications. Modelling...
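
    As a hedged illustration of what a population balance tracks, the sketch below advances a number-density distribution n(x, t) under pure size-independent growth, dn/dt + G dn/dx = 0, with a first-order upwind scheme; real WWTP applications would add aggregation and breakage terms, and all values here are invented.

```python
# Minimal population balance sketch: pure growth of a particle-size distribution,
# dn/dt + G * dn/dx = 0, discretised with a first-order upwind scheme.
# Only illustrates the "distributed property" idea; numbers are illustrative.
import numpy as np

nx, dx = 200, 1.0          # number of size classes and class width (um)
G = 0.5                    # growth rate (um/s)
dt = 0.5 * dx / G          # CFL-stable time step (s)
x = np.arange(nx) * dx

# initial number density: a narrow pulse of small particles
n = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)

for _ in range(200):       # advance in time; the pulse drifts to larger sizes
    n[1:] = n[1:] - G * dt / dx * (n[1:] - n[:-1])
    n[0] = 0.0             # no nucleation at the smallest size class

print("mean particle size (um):", round(float(np.sum(x * n) / np.sum(n)), 2))
```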

  13. From Product Models to Product State Models

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM and in particular how to model a PSM is the Research...

  14. North American Carbon Project (NACP) Regional Model-Model and Model-Data Intercomparison Project

    Science.gov (United States)

    Huntzinger, D. N.; Post, W. M.; Jacobson, A. R.; Cook, R. B.

    2009-05-01

    Available observations are localized and widely separated in both space and time, so we depend heavily on models to characterize, understand, and predict carbon fluxes at regional and global scales. The results from each model differ because they use different approaches (forward vs. inverse), modeling strategies (detailed process, statistical, observation based), process representation, boundary conditions, initial conditions, and driver data. To investigate these differences we conducted a model-model and model-data comparison using available forward ecosystem model and atmospheric inverse output, along with regional scale inventory data. Forward or "bottom-up" models typically estimate carbon fluxes through a set of physiological relationships, and are based on our current mechanistic understanding of how carbon is exchanged within ecosystems. Inverse or "top-down" analyses use measured atmospheric concentrations of CO2, coupled with an atmospheric transport model to infer surface flux distributions. Although bottom-up models do fairly well at reproducing measured fluxes (i.e., net ecosystem exchange) at a given location, they vary considerably in their estimates of carbon flux over regional or continental scales, suggesting difficulty in scaling mechanistic relationships to large areas and/or timescales. Conversely, top-down inverse models predict fluxes that are quantitatively consistent with atmospheric measurements, suggesting that they are capturing large scale variability in flux quite well, but offer limited insights into the processes controlling this variability and how fluxes vary at fine spatial scales. The analyses focused on identifying and quantifying spatial and temporal patterns of carbon fluxes among the models; quantifying across-model variability, as well as comparing simulated or estimated surface fluxes and biomass to observed values at regional to continental scales for the period 2000-2005. The analysis focused on the following three

  15. Population Balance Models: A useful complementary modelling framework for future WWTP modelling

    DEFF Research Database (Denmark)

    Nopens, Ingmar; Torfs, Elena; Ducoste, Joel

    2014-01-01

    Population Balance Models (PBMs) represent a powerful modelling framework for the description of the dynamics of properties that are characterised by statistical distributions. This has been demonstrated in many chemical engineering applications. Modelling efforts of several current and future unit...

  16. Model Metric untuk Mengukur Fleksibilitas Model Proses Bisnis

    Directory of Open Access Journals (Sweden)

    Endang Wahyu Pamungkas

    2014-10-01

    Full Text Available Business organizations around the world currently make extensive use of digital information systems to gain an understanding of the business processes they run. The use of Enterprise Resource Planning (ERP) systems is an example of technology applied in business process management. Through such systems, companies can build and develop their business processes. In addition, companies can quickly adjust business processes to changes that occur as needs and information grow, market conditions shift, or policies change. Given how frequently business processes change, the flexibility of the process models that are built must be improved. To support this improvement in flexibility, a model is needed for measuring the degree of flexibility of a business process model. Such a model can then be used by analysts to make comparisons and so obtain the business process model that is most flexible and best suited to the company. This can be analysed by taking into account the flexibility aspects examined in previous studies. In this paper, the flexibility aspects of business process models are investigated in order to produce a metric model that can quantify the degree of flexibility of a business process model. The metric model produced in this study is able to compute the flexibility of business process models quantitatively. Keywords: ERP, flexibility, metadata, metric model, business process model, variation

  17. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has been carried out on models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that all prediction accuracy in the testing sample improves when the additional variables are included. On the other hand, the prediction accuracy of older and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be remodelled in line with new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
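
    A minimal sketch of the kind of comparison described above, using scikit-learn on synthetic, imbalanced data (not the Slovak company dataset), might look as follows.

```python
# Sketch: compare a classical model (logistic regression) with an ensemble model
# (random forest) for bankruptcy classification. The data are randomly generated
# stand-ins, not the dataset analysed in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)  # imbalanced classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: test accuracy = {acc:.3f}")
```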

  18. The ModelCC Model-Driven Parser Generator

    Directory of Open Access Journals (Sweden)

    Fernando Berzal

    2015-01-01

    Full Text Available Syntax-directed translation tools require the specification of a language by means of a formal grammar. This grammar must conform to the specific requirements of the parser generator to be used. This grammar is then annotated with semantic actions for the resulting system to perform its desired function. In this paper, we introduce ModelCC, a model-based parser generator that decouples language specification from language processing, avoiding some of the problems caused by grammar-driven parser generators. ModelCC receives a conceptual model as input, along with constraints that annotate it. It is then able to create a parser for the desired textual syntax and the generated parser fully automates the instantiation of the language conceptual model. ModelCC also includes a reference resolution mechanism so that ModelCC is able to instantiate abstract syntax graphs, rather than mere abstract syntax trees.

  19. Environmental Satellite Models for a Macroeconomic Model

    International Nuclear Information System (INIS)

    Moeller, F.; Grinderslev, D.; Werner, M.

    2003-01-01

    To support national environmental policy, it is desirable to forecast and analyse environmental indicators consistently with economic variables. However, environmental indicators are physical measures linked to physical activities that are not specified in economic models. One way to deal with this is to develop environmental satellite models linked to economic models. The system of models presented gives a frame of reference where emissions of greenhouse gases, acid gases, and leaching of nutrients to the aquatic environment are analysed in line with - and consistently with - macroeconomic variables. This paper gives an overview of the data and the satellite models. Finally, the results of applying the model system to calculate the impacts on emissions and the economy are reviewed in a few illustrative examples. The models have been developed for Denmark; however, most of the environmental data used are from the CORINAIR system implemented in numerous countries

  20. b-baryon light-cone distribution amplitudes and a dynamical theory for [bq] [ anti b anti q]-tetraquarks

    Energy Technology Data Exchange (ETDEWEB)

    Hambrock, Christian

    2011-04-15

    In my thesis I present our work on the bottom-baryon light-cone distribution amplitudes (LCDAs) and on the [bq][anti-b anti-q]-tetraquarks. For the former we extended the known LCDAs for the ground-state baryon Λ_b to the entire b-baryon ground-state multiplets and included s-quark mass-breaking effects. The LCDAs form crucial input for the calculations of characteristic properties of b-baryon decays. In this context they can for example be used in the calculation of form factors for semileptonic flavor-changing neutral-current (FCNC) decays. For the [bq][anti-b anti-q]-tetraquarks, we calculated the tetraquark mass spectrum for all quarks q = u, d, s, c in a constituent Hamiltonian quark model. We estimated the electronic width by introducing a generalized Van Royen-Weisskopf formula for the tetraquarks, and evaluated the partial hadronic two-body and total decay widths for the tetraquarks with quantum numbers J^PC = 1−−. With this input, we performed a Breit-Wigner fit, including the tetraquark contributions, to the inclusive R_b spectrum measured by BaBar. The obtained χ²/d.o.f. of the BaBar R_b-scan data is fairly good. The resulting fits are suggestive of tetraquark states but not conclusive. We developed a model to describe the transitions e+e− → Y_b → Υ(nS)(π+π−, K+K−, ηπ0), in which Y_b is a 1−− tetraquark state. The model includes the exchange of light tetraquark and meson states. We used this model to fit the invariant-mass and helicity spectra for the dipionic final state measured by Belle and used the results to estimate the spectra of the channels e+e− → Y_b → Υ(nS)(K+K−, ηπ0). The spectra are enigmatic in shape and magnitude and defy an interpretation in the framework of the standard bottomonia, requesting either an interpretation in terms of exotic states, such as
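
    The abstract generalizes the Van Royen-Weisskopf relation to tetraquarks but does not reproduce the generalized formula, so the hedged sketch below only evaluates the standard quarkonium version, Gamma_ee = 16*pi*alpha^2*e_Q^2*|psi(0)|^2/M^2, with placeholder inputs of roughly the right order of magnitude for the Upsilon(1S).

```python
# Sketch: standard Van Royen-Weisskopf electronic width for a 1-- quarkonium,
# Gamma_ee = 16 * pi * alpha^2 * e_Q^2 * |psi(0)|^2 / M^2 (leading order).
# The tetraquark generalization used in the thesis is not reproduced here;
# the numerical inputs below are illustrative placeholders, not fitted values.
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def vrw_electronic_width(e_q, psi0_sq_gev3, mass_gev):
    """Electronic width in GeV for quark charge e_q (units of |e|),
    |psi(0)|^2 in GeV^3 and resonance mass in GeV."""
    return 16.0 * math.pi * ALPHA**2 * e_q**2 * psi0_sq_gev3 / mass_gev**2

# Illustrative numbers of the right order of magnitude for Upsilon(1S):
# |e_b| = 1/3, |psi(0)|^2 ~ 0.5 GeV^3, M ~ 9.46 GeV.
width_gev = vrw_electronic_width(1.0 / 3.0, 0.5, 9.46)
print(f"Gamma_ee ~ {width_gev * 1e6:.2f} keV")
```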

  1. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent
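
    The first checklist point notes that the Nash-Sutcliffe statistic could not discriminate between realistic and unrealistic simulations. As a minimal, hedged illustration of that statistic (synthetic series, not INCA-P output), the sketch below computes the Nash-Sutcliffe efficiency for a simulation that tracks the observed dynamics and for one that only reproduces the observed mean.

```python
# Minimal sketch: Nash-Sutcliffe efficiency (NSE) of a simulated series
# against observations. Data are synthetic placeholders, not INCA-P output.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

rng = np.random.default_rng(1)
obs = 2.0 + np.sin(np.linspace(0, 8 * np.pi, 540)) + 0.2 * rng.normal(size=540)
sim_good = obs + 0.1 * rng.normal(size=540)   # tracks the dynamics
sim_flat = np.full_like(obs, obs.mean())      # reproduces only the mean

print("NSE, good simulation:", round(nash_sutcliffe(obs, sim_good), 3))
print("NSE, mean-only simulation:", round(nash_sutcliffe(obs, sim_flat), 3))
```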

  2. Coupling Climate Models and Forward-Looking Economic Models

    Science.gov (United States)

    Judd, K.; Brock, W. A.

    2010-12-01

    Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin. Current climate models range from General Circulation Models (GCMs) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCMs) help us understand the dynamics of GCMs. The same is true in economics with Computable General Equilibrium Models (CGEs), where some models are infinite-dimensional, multidimensional differential equations while others are simple models. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCMs do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North et al. 1981; Wu and North 2007). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre-based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005), but only in a "one-dimensional" EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two-dimensional EBCM on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into the intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models. However, this will be more complex than existing models with forward
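
    A minimal sketch of the kind of one-dimensional energy balance climate model (EBCM) referred to above is given below, relaxed to steady state on a sin(latitude) grid. The parameter values are textbook-style placeholders, not the values used by the authors, and no economic model is coupled here.

```python
# Minimal sketch of a one-dimensional energy balance climate model (EBCM),
# loosely in the spirit of North-type models:
#   A + B*T = Q*s(x)*(1 - albedo) + D * d/dx[(1 - x^2) dT/dx],  x = sin(lat).
# Parameter values are textbook-style placeholders, not the authors' values.
import numpy as np

n = 47
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
P2 = 0.5 * (3.0 * x**2 - 1.0)              # second Legendre polynomial
Q = 340.0                                  # W/m^2, mean solar forcing
s = 1.0 - 0.48 * P2                        # normalized insolation distribution
albedo = 0.32 + 0.10 * P2                  # higher albedo toward the poles
A, B, D = 203.3, 2.09, 0.55                # OLR linearization and diffusion

T = np.full(n, 10.0)                       # initial temperature guess, deg C
dt = 0.001
for _ in range(20000):                     # relax toward the steady state
    flux = (1.0 - (0.5 * (x[:-1] + x[1:]))**2) * np.diff(T) / dx
    divergence = np.zeros(n)
    divergence[1:-1] = np.diff(flux) / dx
    divergence[0] = 2.0 * flux[0] / dx     # half-cell treatment at the poles
    divergence[-1] = -2.0 * flux[-1] / dx
    T += dt * (Q * s * (1.0 - albedo) - (A + B * T) + D * divergence)

print("Global mean temperature (deg C):", round(float(np.mean(T)), 2))
print("Equator-pole contrast (deg C):", round(float(T[n // 2] - T[0]), 2))
```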

  3. EIA model documentation: Petroleum Market Model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-30

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models (Public Law 94-385, section 57.b.2). The PMM models petroleum refining activities, the marketing of products, and the production of natural gas liquids and domestic methanol, and projects petroleum prices and sources of supply for meeting demand. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption.

  4. Autism Spectrum Disorder and Particulate Matter Air Pollution before, during, and after Pregnancy: A Nested Case–Control Analysis within the Nurses’ Health Study II Cohort

    Science.gov (United States)

    Roberts, Andrea L.; Lyall, Kristen; Hart, Jaime E.; Just, Allan C.; Laden, Francine; Weisskopf, Marc G.

    2014-01-01

    Background Autism spectrum disorder (ASD) is a developmental disorder with increasing prevalence worldwide, yet has unclear etiology. Objective We explored the association between maternal exposure to particulate matter (PM) air pollution and odds of ASD in her child. Methods We conducted a nested case–control study of participants in the Nurses’ Health Study II (NHS II), a prospective cohort of 116,430 U.S. female nurses recruited in 1989, followed by biennial mailed questionnaires. Subjects were NHS II participants’ children born 1990–2002 with ASD (n = 245), and children without ASD (n = 1,522) randomly selected using frequency matching for birth years. Diagnosis of ASD was based on maternal report, which was validated against the Autism Diagnostic Interview-Revised in a subset. Monthly averages of PM with diameters ≤ 2.5 μm (PM2.5) and 2.5–10 μm (PM10–2.5) were predicted from a spatiotemporal model for the continental United States and linked to residential addresses. Results PM2.5 exposure during pregnancy was associated with increased odds of ASD, with an adjusted odds ratio (OR) for ASD per interquartile range (IQR) higher PM2.5 (4.42 μg/m3) of 1.57 (95% CI: 1.22, 2.03) among women with the same address before and after pregnancy (160 cases, 986 controls). Associations with PM2.5 exposure 9 months before or after the pregnancy were weaker in independent models and null when all three time periods were included, whereas the association with the 9 months of pregnancy remained (OR = 1.63; 95% CI: 1.08, 2.47). The association between ASD and PM2.5 was stronger for exposure during the third trimester (OR = 1.42 per IQR increase in PM2.5; 95% CI: 1.09, 1.86) than during the first two trimesters (ORs = 1.06 and 1.00) when mutually adjusted. There was little association between PM10–2.5 and ASD. Conclusions Higher maternal exposure to PM2.5 during pregnancy, particularly the third trimester, was associated with greater odds of a child having ASD
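
    As a hedged illustration of how an odds ratio per IQR increment is obtained from a logistic model (the reporting convention used above), the sketch below fits a logistic regression to synthetic data. The variable names and effect sizes are hypothetical; this is not the NHS II analysis.

```python
# Sketch: odds ratio (OR) per interquartile-range (IQR) increase in an
# exposure, from a logistic regression. Data are synthetic; variable names
# are hypothetical and this is not the NHS II analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
pm25 = rng.gamma(shape=4.0, scale=3.0, size=n)     # fake exposure, ug/m3
confounder = rng.normal(size=n)                    # e.g. a scaled covariate
logit = -3.0 + 0.10 * pm25 + 0.3 * confounder
case = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([pm25, confounder]))
fit = sm.Logit(case, X).fit(disp=False)

iqr = np.percentile(pm25, 75) - np.percentile(pm25, 25)
beta, se = fit.params[1], fit.bse[1]
or_iqr = np.exp(beta * iqr)
ci = np.exp((beta + np.array([-1.96, 1.96]) * se) * iqr)
print(f"OR per IQR ({iqr:.2f} ug/m3): {or_iqr:.2f} (95% CI {ci[0]:.2f}, {ci[1]:.2f})")
```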

  5. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
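
    A minimal sketch of the simplest water-budget approach mentioned above, in which recharge is estimated as precipitation minus evapotranspiration, runoff, and storage change, is given below with purely illustrative annual values.

```python
# Sketch of the simplest water-budget recharge estimate mentioned in the text:
# recharge = precipitation - evapotranspiration - runoff - change in storage.
# Numbers are illustrative annual depths in mm, not measurements.
def recharge_water_budget(precip_mm, et_mm, runoff_mm, delta_storage_mm=0.0):
    return precip_mm - et_mm - runoff_mm - delta_storage_mm

years = {
    2001: dict(precip_mm=820.0, et_mm=540.0, runoff_mm=160.0, delta_storage_mm=10.0),
    2002: dict(precip_mm=640.0, et_mm=510.0, runoff_mm=90.0, delta_storage_mm=-20.0),
}
for year, terms in years.items():
    print(year, "recharge ~", recharge_water_budget(**terms), "mm")
```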

  6. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
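
    As a hedged illustration of model averaging with information-criterion weights (one of several schemes compared in the paper, not necessarily the one it recommends), the sketch below averages three nested linear regressions using Akaike weights on synthetic data.

```python
# Sketch: AIC-based model averaging over candidate linear regressions.
# Akaike weights w_i are proportional to exp(-0.5 * (AIC_i - AIC_min)); the
# averaged prediction is the weighted mean of the candidates' predictions.
# Synthetic data; this is not the paper's simulation design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.8 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=n)   # x3 irrelevant

candidates = {
    "M1: x1": np.column_stack([x1]),
    "M2: x1,x2": np.column_stack([x1, x2]),
    "M3: x1,x2,x3": np.column_stack([x1, x2, x3]),
}
fits = {name: sm.OLS(y, sm.add_constant(X)).fit() for name, X in candidates.items()}
aic = np.array([f.aic for f in fits.values()])
weights = np.exp(-0.5 * (aic - aic.min()))
weights /= weights.sum()

preds = np.column_stack([f.fittedvalues for f in fits.values()])
averaged = preds @ weights
for (name, f), w in zip(fits.items(), weights):
    print(f"{name:14s} AIC={f.aic:8.2f} weight={w:.3f}")
print("first 3 model-averaged fitted values:", np.round(averaged[:3], 3))
```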

  7. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
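
    A minimal sketch of the DINA item response rule discussed above is given below: an examinee answers correctly with probability 1 - slip when mastering every skill the item's Q-matrix row requires, and with the guessing probability otherwise. The Q-matrix, skill profiles, and slip/guess values are illustrative placeholders.

```python
# Sketch of the DINA ("deterministic-input noisy-AND") item response rule:
#   eta_ij = prod_k alpha_ik ** q_jk   (1 if examinee i has all skills item j needs)
#   P(X_ij = 1) = guess_j ** (1 - eta_ij) * (1 - slip_j) ** eta_ij
# The Q-matrix, skill profiles and slip/guess values are illustrative placeholders.
import numpy as np

Q = np.array([[1, 0],      # item 1 requires skill 1
              [0, 1],      # item 2 requires skill 2
              [1, 1]])     # item 3 requires both skills
alpha = np.array([[0, 0],
                  [1, 0],
                  [1, 1]])                 # three examinee skill profiles
slip = np.array([0.10, 0.15, 0.20])
guess = np.array([0.20, 0.25, 0.10])

# eta[i, j] = 1 iff examinee i masters every skill required by item j
eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2).astype(int)
p_correct = guess ** (1 - eta) * (1 - slip) ** eta
print(np.round(p_correct, 3))
```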

  8. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding the 32-negative dataset of MiRTif to our testing set (MiRecords) ...

  9. Underground economy modelling: simple models with complicated dynamics

    OpenAIRE

    Albu, Lucian-Liviu

    2003-01-01

    The paper aims to model the underground economy using two different models: one based on the labor supply method and a generalized model for the allocation of time. The model based on the labor supply method is conceived as a simulation model in order to determine some reasonable thresholds of the extension of the underground sector based only on the available macroeconomic statistical data. The generalized model for the allocation of time is based on a direct approach which estimates the underg...

  10. Integrative structure modeling with the Integrative Modeling Platform.

    Science.gov (United States)

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.

  11. Modeling volatility using state space models.

    Science.gov (United States)

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
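
    As a hedged illustration of the model class described above, the sketch below simulates an AR(1) hidden state observed with additive noise, i.e. a linear-Gaussian state space model for log volatility, and filters it with a scalar Kalman filter. Parameters and data are synthetic; they are not the estimates reported for the FX or index series.

```python
# Sketch: AR(1) hidden state with additive observational noise (a linear-
# Gaussian state space model for log volatility), plus a scalar Kalman filter.
# Parameters and data are synthetic illustrations, not estimates from the
# FX or index data used in the paper.
import numpy as np

rng = np.random.default_rng(3)
T, phi, q, r = 500, 0.98, 0.05, 0.40    # length, AR coeff, state/obs variances

h = np.zeros(T)                          # hidden log-volatility
for t in range(1, T):
    h[t] = phi * h[t - 1] + np.sqrt(q) * rng.normal()
y = h + np.sqrt(r) * rng.normal(size=T)  # noisy observation

# Kalman filter for the scalar model  h_t = phi h_{t-1} + w,  y_t = h_t + v
m, P = 0.0, 1.0
filtered = np.empty(T)
for t in range(T):
    m_pred, P_pred = phi * m, phi**2 * P + q      # predict
    K = P_pred / (P_pred + r)                     # Kalman gain
    m = m_pred + K * (y[t] - m_pred)              # update
    P = (1 - K) * P_pred
    filtered[t] = m

print("relaxation time (steps) ~ -1/ln(phi):", round(-1.0 / np.log(phi), 1))
print("RMSE of filtered state:", round(float(np.sqrt(np.mean((filtered - h) ** 2))), 3))
```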

  12. Document Models

    Directory of Open Access Journals (Sweden)

    A.A. Malykh

    2017-08-01

    Full Text Available In this paper, the concept of locally simple models is considered. Locally simple models are arbitrarily complex models built from relatively simple components. A lot of practically important domains of discourse can be described as locally simple models, for example, business models of enterprises and companies. Up to now, research in human reasoning automation has been mainly concentrated around the most intellectually intensive activities, such as automated theorem proving. On the other hand, the retailer business model is formed from "jobs", and each "job" can be modelled and automated more or less easily. At the same time, the whole retailer model as an integrated system is extremely complex. In this paper, we offer a variant of the mathematical definition of a locally simple model. This definition is intended for modelling a wide range of domains. Therefore, we also must take into account perceptual and psychological issues. Logic is elitist, and if we want to attract as many people as possible to our models, we need to hide this elitism behind some metaphor to which 'ordinary' people are accustomed. As such a metaphor, we use the concept of a document, so our locally simple models are called document models. Document models are built in the paradigm of semantic programming. This allows us to achieve another important goal: to make the document models executable. Executable models are models that can act as practical information systems in the described domain of discourse. Thus, if our model is executable, then programming becomes redundant. The direct use of a model, instead of coding it in a program, brings important advantages, for example, a drastic cost reduction for development and maintenance. Moreover, since the model is sound and not dissolved within programming modules, we can directly apply AI tools, in particular, machine learning. This significantly expands the possibilities for automation and

  13. An online model composition tool for system biology models.

    Science.gov (United States)

    Coskun, Sarp A; Cicek, A Ercument; Lai, Nicola; Dash, Ranjan K; Ozsoyoglu, Z Meral; Ozsoyoglu, Gultekin

    2013-09-05

    There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. The Model Composition Tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners, and, for more advanced purposes, users will be able to access and employ models of the BioModels Database as well.

  14. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process and reasons are given for using probabilities-based modeling with the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present

  15. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of trusted measurement supporting behavior measurement based on the trusted connection architecture (TCA) with three entities and three levels is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions, namely trusted identity, trusted status and trusted behavior, satisfies the essential requirements of trusted measurement, and unifies the TCA with three entities and three levels.

  16. A unification of RDE model and XCDM model

    International Nuclear Information System (INIS)

    Liao, Kai; Zhu, Zong-Hong

    2013-01-01

    In this Letter, we propose a new generalized Ricci dark energy (NGR) model to unify Ricci dark energy (RDE) and XCDM. Our model can distinguish between RDE and XCDM by introducing a parameter β called the weight factor. When β=1, the NGR model becomes the usual RDE model. The XCDM model corresponds to β=0. Moreover, the NGR model permits the situation where neither β=1 nor β=0. We then perform a statefinder analysis on the NGR model to see how β affects the trajectory on the r–s plane. In order to know the value of β, we constrain the NGR model with the latest observations, including type Ia supernovae (SNe Ia) from the Union2 set (557 data), the baryonic acoustic oscillation (BAO) observation from the spectroscopic Sloan Digital Sky Survey (SDSS) data release 7 (DR7) galaxy sample and the cosmic microwave background (CMB) observation from the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) results. With the Markov Chain Monte Carlo (MCMC) method, the constraint result is β = 0.08 +0.30/−0.21 (1σ), +0.43/−0.28 (2σ), which indicates that the observations prefer an XCDM universe rather than the RDE model. It seems the RDE model is ruled out in the NGR scenario within the 2σ region. Furthermore, we compare it with some successful cosmological models using the AIC information criterion. The NGR model seems to be a good choice for describing the universe.

  17. Downscaling GISS ModelE Boreal Summer Climate over Africa

    Science.gov (United States)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE computed sea-surface temperatures (SST) over the eastern South Atlantic Ocean are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent; it eliminates the ModelE double ITCZ over the Atlantic and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  18. Essays on model uncertainty in financial models

    NARCIS (Netherlands)

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  19. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  20. ModelMate - A graphical user interface for model analysis

    Science.gov (United States)

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  1. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever increasing computing power available at affordable prices. This paper describes this process from several angles: First the operational requirements which are more critical from the point of view of model performance, both for normal and off-normal operating conditions; A second section discusses core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT and S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally we consider the model validation procedures, which are of course an integral part of model development, and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  2. Mineralogic Model (MM3.0) Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    C. Lum

    2002-02-12

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1

  3. Mineralogic Model (MM3.0) Analysis Model Report

    International Nuclear Information System (INIS)

    Lum, C.

    2002-01-01

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M and O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M and O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components

  4. ERM model analysis for adaptation to hydrological model errors

    Science.gov (United States)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these phenomena generate errors in flood forecasting models and lead to unrealistic results. Therefore, to overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. In the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error common in hydrological modelling: timing, shape and volume. The new lumped model, the ERM model, has been selected for this study, and its parameters are evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  5. Model documentation report: Short-Term Hydroelectric Generation Model

    International Nuclear Information System (INIS)

    1993-08-01

    The purpose of this report is to define the objectives of the Short-Term Hydroelectric Generation Model (STHGM), describe its basic approach, and provide details on the model structure. This report is intended as a reference document for model analysts, users, and the general public. Documentation of the model is in accordance with the Energy Information Administration's (EIA) legal obligation to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). The STHGM performs a short-term (18- to 27-month) forecast of hydroelectric generation in the United States using an autoregressive integrated moving average (ARIMA) time series model with precipitation as an explanatory variable. The model results are used as input for the Short-Term Energy Outlook.
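
    As a hedged illustration of the general model type the documentation describes, an autoregressive integrated moving average model with precipitation as an exogenous regressor, the sketch below fits an ARIMA(1,1,1) with statsmodels to synthetic monthly data. The order, data, and units are illustrative and are not the STHGM specification.

```python
# Sketch: ARIMA with an exogenous precipitation regressor, the general model
# type the STHGM documentation describes. Synthetic monthly data and an
# arbitrary (1,1,1) order; not the actual STHGM specification.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
months = 120
precip = 80 + 30 * np.sin(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 10, months)
generation = 900 + 2.5 * precip + np.cumsum(rng.normal(0, 5, months))  # fake GWh

model = ARIMA(generation, exog=precip, order=(1, 1, 1))
fit = model.fit()

# 18-month-ahead forecast given an assumed future precipitation path
future_precip = 80 + 30 * np.sin(2 * np.pi * np.arange(months, months + 18) / 12)
forecast = fit.forecast(steps=18, exog=future_precip)
print(np.round(forecast[:6], 1))
```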

  6. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce the mass of accumulation

  7. Model uncertainty: Probabilities for models?

    International Nuclear Information System (INIS)

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly-used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising.

  8. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
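
    Two of the techniques highlighted above, Latin hypercube sampling and multiplicative chain models with independent lognormal inputs, are combined in the hedged sketch below; the distributions and parameters are illustrative, not taken from any assessment model.

```python
# Sketch: Latin hypercube sampling (LHS) of independent lognormal inputs to a
# multiplicative chain model y = x1 * x2 * x3, the setting highlighted in the
# abstract. Parameters are illustrative, not from any assessment model.
import numpy as np
from scipy.stats import lognorm, qmc

n_samples, n_inputs = 1000, 3
sigmas = [0.5, 0.8, 0.3]                   # log-space standard deviations
medians = [2.0, 0.1, 50.0]                 # medians of each input

sampler = qmc.LatinHypercube(d=n_inputs, seed=0)
u = sampler.random(n_samples)              # uniform LHS design in [0,1)^d

x = np.column_stack([
    lognorm.ppf(u[:, k], s=sigmas[k], scale=medians[k]) for k in range(n_inputs)
])
y = np.prod(x, axis=1)                     # multiplicative chain output

# For a product of independent lognormals, the median of y is the product
# of the input medians, which gives a quick analytic check on the sampling.
print("sample median of y:", round(float(np.median(y)), 3))
print("analytic median  :", round(float(np.prod(medians)), 3))
```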

  9. Galactic models

    International Nuclear Information System (INIS)

    Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.

    1990-01-01

    Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings

  10. A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle

    Science.gov (United States)

    Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine

    2016-01-01

    A range of proxy observations have recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P - E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to the preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a

  11. The Bond Fluctuation Model and Other Lattice Models

    Science.gov (United States)

    Müller, Marcus

    Lattice models constitute a class of coarse-grained representations of polymeric materials. They have enjoyed a longstanding tradition for investigating the universal behavior of long chain molecules by computer simulations and enumeration techniques. A coarse-grained representation is often necessary to investigate properties on large time- and length scales. First, some justification for using lattice models will be given and the benefits and limitations will be discussed. Then, the bond fluctuation model by Carmesin and Kremer [1] is placed into the context of other lattice models and compared to continuum models. Some specific techniques for measuring the pressure in lattice models will be described. The bond fluctuation model has been employed in more than 100 simulation studies in the last decade and only few selected applications can be mentioned.

  12. A Distributed Snow Evolution Modeling System (SnowModel)

    Science.gov (United States)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.
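
    As a deliberately simplified, hedged illustration of snow water-equivalent evolution (SnowModel itself uses full energy-balance and blowing-snow physics, not the temperature-index scheme shown here), the sketch below accumulates and melts a synthetic snowpack with a degree-day factor; all parameters are illustrative.

```python
# Toy sketch of snow water-equivalent (SWE) evolution with a temperature-index
# (degree-day) scheme. SnowModel itself uses full energy-balance and blowing-
# snow physics; this simpler scheme and its parameters are only illustrative.
import numpy as np

rng = np.random.default_rng(5)
days = 240
temp_c = (8 * np.sin(2 * np.pi * (np.arange(days) - 60) / 365 - np.pi / 2)
          + rng.normal(0, 3, days))                 # synthetic daily temperature
precip_mm = rng.gamma(0.6, 6.0, days)               # synthetic daily precipitation

ddf = 3.0          # degree-day melt factor, mm / (deg C * day), illustrative
t_snow = 0.5       # rain/snow threshold temperature, deg C

swe = np.zeros(days)
for d in range(1, days):
    snowfall = precip_mm[d] if temp_c[d] < t_snow else 0.0
    melt = max(0.0, ddf * temp_c[d]) if swe[d - 1] > 0 else 0.0
    swe[d] = max(0.0, swe[d - 1] + snowfall - melt)

print("peak SWE (mm):", round(float(swe.max()), 1), "on day", int(swe.argmax()))
```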

  13. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamic in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author) [de]

  14. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  15. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    Science.gov (United States)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically, because it is not simple to interpret exactly why a model is producing the results it does and to identify which model assumptions are key, as models combine sub-models of many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), which automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser; PecAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration to enhance our predictive understanding of biological systems.

  16. Process models and model-data fusion in dendroecology

    Directory of Open Access Journals (Sweden)

    Joel eGuiot

    2014-08-01

    Full Text Available Dendrochronology (i.e. the study of annually dated tree-ring time series) has proved to be a powerful technique to understand tree growth. This paper aims to show the value of using ecophysiological modeling not only to understand and predict tree growth (dendroecology) but also to reconstruct past climates (dendroclimatology). Process models have been used for several decades in dendroclimatology, but it is only recently that methods of model-data fusion have led to significant progress in modeling tree growth as a function of climate and in reconstructing past climates. These model-data fusion (MDF) methods, mainly based on the Bayesian paradigm, have been shown to be powerful for both model calibration and model inversion. After a rapid survey of tree-growth modeling, we illustrate MDF with examples based on series of Southern France Aleppo pines and Central France oaks. These examples show that if plants experienced CO2 fertilization, this would have a significant effect on tree growth, which in turn would bias the climate reconstructions. This bias could be extended to other environmental non-climatic factors directly or indirectly affecting annual ring formation and not taken into account in classical empirical models, which supports the use of more complex process-based models. Finally, we conclude by showing the value of the data assimilation methods applied in climatology to produce climate re-analyses.

  17. `Models of' versus `Models for'. Toward an Agent-Based Conception of Modeling in the Science Classroom

    Science.gov (United States)

    Gouvea, Julia; Passmore, Cynthia

    2017-03-01

    The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic— models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.

  18. Eclipse models

    International Nuclear Information System (INIS)

    Michel, F.C.

    1989-01-01

    Three existing eclipse models for the PSR 1957 + 20 pulsar are discussed in terms of their requirements and the information they yield about the pulsar wind: the interacting wind from a companion model, the magnetosphere model, and the occulting disk model. It is pointed out that the wind model requires an MHD wind from the pulsar, with enough particles that the Poynting flux of the wind can be thermalized; in this model, a large flux of energetic radiation from the pulsar is required to accompany the wind and drive the wind off the companion. The magnetosphere model requires an EM wind, which is Poynting flux dominated; the advantage of this model over the wind model is that the plasma density inside the magnetosphere can be orders of magnitude larger than in a magnetospheric tail blown back by wind interaction. The occulting disk model also requires an EM wind so that the interaction would be pushed down onto the companion surface, minimizing direct interaction of the wind with the orbiting macroscopic particles.

  19. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM) developed under the umbrella of PNUCC's System Analysis Committee is capable of simulating the operation of a given load/resource scenario. This model employs a Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and the modifications made to the capacity model in order to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS data are presented.

  20. Using the Model Coupling Toolkit to couple earth system models

    Science.gov (United States)

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
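    The coupling pattern described above (two models advancing independently and exchanging fields only at predetermined synchronization intervals) can be sketched in a few lines. The sketch below is illustrative only: the class and variable names (ToyModel, sync_dt, exchange) are invented for this example and do not reflect the MCT, ROMS, SWAN, or COAMPS APIs, and the serial exchange stands in for MCT's distributed-memory transfer and regridding.

```python
# Toy coupling loop: each model keeps its own time step; fields are swapped
# only at the coupling (synchronization) interval, as in the abstract above.
import numpy as np

class ToyModel:
    def __init__(self, name, n, dt):
        self.name, self.dt = name, dt
        self.state = np.zeros(n)      # e.g. a surface field on this model's grid
        self.forcing = np.zeros(n)    # field received from the other model

    def step(self, t_start, t_end):
        t = t_start
        while t < t_end - 1e-12:      # advance with the model's own time step
            self.state += self.dt * (0.1 * self.forcing - 0.01 * self.state)
            t += self.dt

def exchange(a, b):
    # In a real coupled system this is an M x N distributed transfer with
    # regridding; here identical grids are assumed.
    a.forcing, b.forcing = b.state.copy(), a.state.copy()

ocean = ToyModel("ocean", n=100, dt=30.0)   # seconds
waves = ToyModel("waves", n=100, dt=60.0)
sync_dt, t_final = 600.0, 3600.0            # couple every 10 min for 1 h

t = 0.0
while t < t_final:
    ocean.step(t, t + sync_dt)
    waves.step(t, t + sync_dt)
    exchange(ocean, waves)
    t += sync_dt
```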

  1. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
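    The cdf / inverse-cdf construction described above can be illustrated with a short script. This is a minimal sketch: the nonparametric Bayesian marginal of the paper is replaced by a fixed gamma distribution, and the normal-theory dynamics by a plain AR(1), purely to show how Gaussian-copula dependence is combined with a non-Gaussian marginal.

```python
# Copula-transformed autoregressive sketch: Gaussian AR(1) supplies the
# dependence; cdf followed by inverse cdf supplies the (non-Gaussian) marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, phi = 2000, 0.8                          # series length, AR(1) coefficient

# Stationary Gaussian AR(1) with unit marginal variance.
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.normal()

u = stats.norm.cdf(z)                       # uniform marginals, Gaussian-copula dynamics
x = stats.gamma(a=2.0, scale=1.5).ppf(u)    # target non-Gaussian marginal

print("sample skewness:", stats.skew(x))                       # clearly non-Gaussian
print("lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1])
```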

  2. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    Science.gov (United States)

    Plotnitsky, Arkady

    2017-06-01

    The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The

  3. Optimisation of BPMN Business Models via Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work, where we developed a model checking based method for the analysis of BPMN models. We define a structure for expressing optimisation goals...... for synthesized BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We here present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates...

  4. USE OF THE ZMIJEWSKI MODEL, THE ALTMAN MODEL, AND THE SPRINGATE MODEL AS PREDICTORS OF DELISTING

    Directory of Open Access Journals (Sweden)

    Mila Fatmawati

    2017-03-01

    Full Text Available The purpose of this study was to find empirical evidence on whether the Zmijewski model, the Altman model, and the Springate model could be used as predictors of company delisting. The objects of the study were companies whose traded shares were removed from listing (delisted) on the Indonesia Stock Exchange in 2003-2009. As a benchmark for the delisted companies, companies that were still listed on the Stock Exchange and matched on the same number and kind of business field were used; the comparison sample was taken randomly over the same period as the delisted companies. The method of analysis was logistic regression. The results showed that, of the three delisting predictor models, only the Zmijewski model could be used to predict delisting in the observation period, while the Altman model and the Springate model could not be used as delisting prediction models. This is because the Zmijewski model emphasizes the amount of debt in predicting delisting: the larger the debt, the more accurately it predicts a company's delisting. The Altman and Springate models, by contrast, place more emphasis on profitability measures: the smaller the profitability, the more precisely they predict delisting. The delisted companies under observation, however, tended still to be able to earn a profit while carrying a relatively large amount of debt.
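    A minimal sketch of the study design is given below: score matched delisted and still-listed firms with a leverage-oriented distress score and fit a logistic regression of delisting status on that score. The data are synthetic and the score coefficients are illustrative, Zmijewski-style values only; they are not the published Zmijewski estimates or the paper's Indonesian sample.

```python
# Illustrative delisting prediction: distress score + logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
delisted = rng.integers(0, 2, n)                      # 1 = delisted, 0 = still listed
roa = rng.normal(0.05 - 0.08 * delisted, 0.05, n)     # return on assets
leverage = rng.normal(0.50 + 0.25 * delisted, 0.10, n)  # total debt / total assets
liquidity = rng.normal(1.5 - 0.3 * delisted, 0.4, n)  # current ratio

def distress_score(roa, lev, liq, b=(-4.3, -4.5, 5.7, 0.004)):
    # Zmijewski-style: heavier weight on leverage than on profitability or liquidity.
    return b[0] + b[1] * roa + b[2] * lev + b[3] * liq

X = distress_score(roa, leverage, liquidity).reshape(-1, 1)
clf = LogisticRegression().fit(X, delisted)
print("score coefficient:", clf.coef_[0][0])
print("in-sample accuracy:", clf.score(X, delisted))
```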

  5. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase-interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints is considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows to implement an efficient solving strategy and a concise error diagnosis. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  6. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  7. A Lagrangian mixing frequency model for transported PDF modeling

    Science.gov (United States)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
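    The IEM closure referred to above relaxes each particle's composition toward the local mean at a rate set by the mixing frequency. The sketch below shows only that update, with the mixing frequency passed in as a plain number; in the paper the frequency would instead come from the Lagrangian dissipation estimates, and the single-cell mean used here is a simplifying assumption.

```python
# IEM (interaction by exchange with the mean) particle update:
#   d(phi_i)/dt = -0.5 * C_phi * omega * (phi_i - <phi>)
import numpy as np

def iem_step(phi, omega, dt, c_phi=2.0):
    """Advance particle scalars phi over one step dt with mixing frequency omega."""
    mean = phi.mean()                           # local mean (one cell assumed here)
    decay = np.exp(-0.5 * c_phi * omega * dt)   # exact integration of the linear relaxation
    return mean + (phi - mean) * decay

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 1.0, 1000)               # e.g. mixture fraction on particles
for _ in range(50):
    phi = iem_step(phi, omega=100.0, dt=1e-3)
print("mean (conserved):", phi.mean(), " variance (decays):", phi.var())
```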

  8. Modelling MIZ dynamics in a global model

    Science.gov (United States)

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with a greater fidelity. Current models are not fully fit for the purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  9. Graphical Rasch models

    DEFF Research Database (Denmark)

    Kreiner, Svend; Christensen, Karl Bang

    Rasch models; Partial Credit models; Rating Scale models; Item bias; Differential item functioning; Local independence; Graphical models

  10. Transforming Graphical System Models to Graphical Attack Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof

    2016-01-01

    Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations...... approach to transforming graphical system models to graphical attack models in the form of attack trees. Based on an asset in the model, our transformations result in an attack tree that represents attacks by all possible actors in the model, after which the actor in question has obtained the asset....
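    To make the target artifact concrete, the sketch below shows the kind of structure such a transformation produces: an attack tree with AND/OR nodes whose leaves are basic actions, evaluated against an actor's capabilities. The scenario, node labels, and evaluation routine are invented for illustration and do not reproduce the paper's system-model-to-attack-tree transformation.

```python
# Minimal AND/OR attack tree and a capability-based feasibility check.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    gate: str = "LEAF"                 # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)

    def achievable(self, capabilities: set) -> bool:
        if self.gate == "LEAF":
            return self.label in capabilities
        results = (c.achievable(capabilities) for c in self.children)
        return all(results) if self.gate == "AND" else any(results)

steal_asset = Node("obtain customer records", "OR", [
    Node("remote compromise", "AND", [
        Node("phish credentials"),
        Node("exfiltrate database"),
    ]),
    Node("physical access", "AND", [
        Node("enter server room"),
        Node("copy backup disk"),
    ]),
])

print(steal_asset.achievable({"phish credentials", "exfiltrate database"}))  # True
print(steal_asset.achievable({"enter server room"}))                         # False
```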

  11. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  12. Pavement Aging Model by Response Surface Modeling

    Directory of Open Access Journals (Sweden)

    Manzano-Ramírez A.

    2011-10-01

    Full Text Available In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL), and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the (ΔL) and (ΔF) models were almost adequate, with an error of 20 %, which was associated with the other environmental factors that were not considered at the beginning of the research.
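    The response-surface step itself is a small least-squares fit. The sketch below fits a full second-order polynomial in aging time and temperature to a synthetic stand-in for the volatilized material response; the data, variable ranges, and coefficients are illustrative, not the Marshall-specimen measurements of the study.

```python
# Second-order response surface fitted by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
time = rng.uniform(0, 120, 30)          # aging time, hours (illustrative range)
temp = rng.uniform(40, 80, 30)          # oven temperature, deg C (illustrative range)
vm = 0.5 + 0.01 * time + 0.02 * temp + 1e-4 * time * temp + rng.normal(0, 0.05, 30)

# Design matrix for the full quadratic model: 1, t, T, t*T, t^2, T^2
X = np.column_stack([np.ones_like(time), time, temp, time * temp, time**2, temp**2])
coef, *_ = np.linalg.lstsq(X, vm, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((vm - pred)**2) / np.sum((vm - vm.mean())**2)
print("fitted coefficients:", np.round(coef, 5))
print("R^2 of the response surface:", round(r2, 3))
```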

  13. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
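    The mechanism behind the combined model choice and variable selection described above is componentwise boosting: at every iteration each candidate base learner (one per model term) is fitted to the current residuals, only the best one is added with a small step length, and terms that are never selected drop out. The sketch below uses simple univariate linear base learners as stand-ins for the paper's penalized splines, tensor products, and spatial terms.

```python
# Componentwise L2 boosting with one linear base learner per covariate.
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.5, n)   # only covariates 0 and 3 matter

nu, m_stop = 0.1, 200                    # step length and number of boosting iterations
coef = np.zeros(p)
fit = np.full(n, y.mean())               # offset

for _ in range(m_stop):
    resid = y - fit
    betas = X.T @ resid / (X**2).sum(axis=0)                 # univariate OLS fits to residuals
    sse = ((resid[None, :] - betas[:, None] * X.T)**2).sum(axis=1)
    j = np.argmin(sse)                                       # best-fitting component this round
    coef[j] += nu * betas[j]
    fit += nu * betas[j] * X[:, j]

print("selected coefficients:", np.round(coef, 2))   # near zero for irrelevant covariates
```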

  14. Model coupler for coupling of atmospheric, oceanic, and terrestrial models

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Kobayashi, Takuya; Tsuduki, Katsunori; Kim, Keyong-Ok

    2007-02-01

    A numerical simulation system, SPEEDI-MP, which is applicable to various environmental studies, consists of dynamical models and material transport models for the atmospheric, terrestrial, and oceanic environments, meteorological and geographical databases for model inputs, and system utilities for file management, visualization, analysis, etc., using graphical user interfaces (GUIs). As a numerical simulation tool, a model coupling program (model coupler) has been developed. It controls parallel calculations of several models and data exchanges among them to realize the dynamical coupling of the models. It is applicable to any model with a three-dimensional structured grid system, which is used by most environmental and hydrodynamic models. A coupled model system for water circulation has been constructed with atmosphere, ocean, wave, hydrology, and land-surface models using the model coupler. Performance tests of the coupled model system for water circulation were also carried out for the flood event in Saudi Arabia in January 2005 and the storm surge caused by Hurricane Katrina in August 2005. (author)

  15. ICRF modelling

    International Nuclear Information System (INIS)

    Phillips, C.K.

    1985-12-01

    This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. Three distinct types of modelling codes are contrasted, together with their validity and limitations: discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs

  16. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse...... are dynamic and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can...

  17. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases
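    The setting can be illustrated with a small experiment: fit global polynomial models x_{t+1} = p(x_t) of several degrees to a deterministic series generated by the logistic map (itself a degree-2 polynomial) and compare in-sample errors. Several models fit almost equally well, which is the flavour of degeneracy discussed above. This is an illustration only, not the paper's analysis.

```python
# Fit global polynomial maps of several degrees to a logistic-map time series.
import numpy as np

x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])       # the "perfect" model has degree 2

x_in, x_out = x[:-1], x[1:]
for degree in (2, 3, 5, 8):
    coeffs = np.polyfit(x_in, x_out, degree)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_in) - x_out) ** 2))
    print(f"degree {degree}: in-sample RMSE = {rmse:.2e}")
```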

  18. On the shell model connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.; Levai, G.; Kato, K.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand the relation of the shell model and cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known, but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: Can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energy of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model, in which not only the cluster model space is obtained from the full shell model space by an SU(3) symmetry-dictated truncation, but SU(3) dynamically symmetric interactions are also applied. Actually, Hamiltonians of this kind proved to be successful in describing the gross features of cluster states in a wide energy range. The novel feature of the present work is that we apply exclusively shell model interactions. The energies obtained from such a Hamiltonian for several bands of the (¹²C, ¹⁴C, ¹⁶O, ²⁰Ne, ⁴⁰Ca) + α systems turn out to be in good agreement with the experimental

  19. Approximating chiral quark models with linear σ-models

    International Nuclear Information System (INIS)

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

    We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  20. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  1. Modeling arson - An exercise in qualitative model building

    Science.gov (United States)

    Heineke, J. M.

    1975-01-01

    A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis. The fewer the restrictions, the wider is the class of agents to which the model is applicable, and accordingly more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.

  2. Air Quality Dispersion Modeling - Alternative Models

    Science.gov (United States)

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  3. On the shell-model-connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago [1] as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand the relation of the shell model and cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known [2] but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: Can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energy of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model [3,4] in order to find an answer to this question, which seems to be affirmative. In particular, the energies obtained from such a Hamiltonian for several bands of the (¹²C, ¹⁴C, ¹⁶O, ²⁰Ne, ⁴⁰Ca) + α systems turn out to be in good agreement with the experimental values. The present results show that the simple and transparent SU(3) connection between the spherical shell model and the cluster model is valid not only for the harmonic oscillator interactions, but for much more general (SU(3) dynamically symmetric) Hamiltonians as well, which result in realistic energy spectra. Via

  4. Wind tunnel modeling of roadways: Comparison with mathematical models

    International Nuclear Information System (INIS)

    Heidorn, K.; Davies, A.E.; Murphy, M.C.

    1991-01-01

    The assessment of air quality impacts from roadways is a major concern to urban planners. In order to assess future road and building configurations, a number of techniques have been developed including mathematical models, which simulate traffic emissions and atmospheric dispersion through a series of mathematical relationships and physical models. The latter models simulate emissions and dispersion through scaling of these processes in a wind tunnel. Two roadway mathematical models, HIWAY-2 and CALINE-4, were applied to a proposed development in a large urban area. Physical modeling procedures developed by Rowan Williams Davies and Irwin Inc. (RWDI) in the form of line source simulators were also applied, and the resulting carbon monoxide concentrations were compared. The results indicated a factor of two agreement between the mathematical and physical models. The physical model, however, reacted to change in building massing and configuration. The mathematical models did not, since no provision for such changes was included in the mathematical models. In general, the RWDI model resulted in higher concentrations than either HIWAY-2 or CALINE-4. Where there was underprediction, it was often due to shielding of the receptor by surrounding buildings. Comparison of these three models with the CALTRANS Tracer Dispersion Experiment showed good results although concentrations were consistently underpredicted

  5. Analysis of deregulation models; Denryoku shijo jiyuka model no bunseki

    Energy Technology Data Exchange (ETDEWEB)

    Yajima, M. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1996-04-01

    Trends toward power market deregulation were investigated in Japan and 16 other countries, and various deregulation models were examined and evaluated for their merits and demerits. There are four basic models, that is, the franchise bidding model, the competitive bidding in power generation model, the wholesale or retail wheeling model, and the mandatory or voluntary pool model. Power market deregulation has been a global tendency since the second half of the 1970s, with various models adopted by different countries. Of these models, it is the retail wheeling model and the pool models (open access models) that allow the final customer to select power suppliers, and the number of countries adopting them is increasing. These models are characterized in that the disintegration of the vertical transmission-distribution integration (separation of distribution service and retail supply service) and the liberalization of the retail market are accomplished simultaneously. The pool models, in particular, are enjoying favor because conditions for fair competition have already been prepared and because they are believed to be highly efficient. In Japan and France, where importance is attached to atomic power generation, the competitive bidding model is adopted as a means to harmonize the introduction of competition into the source development and power generation sectors. 7 refs., 4 tabs.

  6. Modelling Overview

    DEFF Research Database (Denmark)

    Larsen, Lars Bjørn; Vesterager, Johan

    This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods, and identifies tools which can provide support for this modelling activity. The model adopted for global manufacturing is that of an extended enterprise s...

  7. Statistical Model Checking of Rich Models and Properties

    DEFF Research Database (Denmark)

    Poulsen, Danny Bøgsted

    in undecidability issues for the traditional model checking approaches. Statistical model checking has proven itself a valuable supplement to model checking and this thesis is concerned with extending this software validation technique to stochastic hybrid systems. The thesis consists of two parts: the first part...... motivates why existing model checking technology should be supplemented by new techniques. It also contains a brief introduction to probability theory and concepts covered by the six papers making up the second part. The first two papers are concerned with developing online monitoring techniques...... systems. The fifth paper shows how stochastic hybrid automata are useful for modelling biological systems and the final paper is concerned with showing how statistical model checking is efficiently distributed. In parallel with developing the theory contained in the papers, a substantial part of this work...

  8. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U. S . Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  9. Geochemistry Model Validation Report: Material Degradation and Release Model

    International Nuclear Information System (INIS)

    Stockman, H.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U. S . Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17)

  10. Assessing physical models used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Computer codes used to predict the behaviour of aerosols in water-cooled reactor containment buildings after severe accidents contain a variety of physical models. Special models are in place for describing agglomeration processes where small aerosol particles combine to form larger ones. Other models are used to calculate the rates at which aerosol particles are deposited on building structures. Condensation of steam on aerosol particles is currently a very active area in aerosol modelling. In this paper, the physical models incorporated in the current available international codes for all of these processes are reviewed and documented. There is considerable variation in models used in different codes, and some uncertainties exist as to which models are superior. 28 refs

  11. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    Science.gov (United States)

    2015-11-04

    The Particle Tracking Model (PTM) is a Lagrangian...currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave...and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development

  12. A BRDF statistical model applying to space target materials modeling

    Science.gov (United States)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    In order to solve the problem of the poor performance of the five-parameter semi-empirical model when fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The effectiveness of the refined model is verified by comparing the fitting results for three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional modeling visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen. This demonstrates the refined model's good descriptive ability for material characterization as well.
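    The inversion step alone can be sketched with a genetic-style global optimizer. The example below uses SciPy's differential evolution to recover the parameters of a toy diffuse-plus-Gaussian-specular-lobe BRDF; the toy model, parameter names, and data are invented for illustration and do not reproduce the paper's six-parameter refined statistical model or its measurements.

```python
# Global-optimizer parameter inversion on a toy BRDF.
import numpy as np
from scipy.optimize import differential_evolution

def toy_brdf(theta_r, kd, ks, m):
    # Lambertian term plus a Gaussian specular lobe around theta_r = 0 (toy model).
    return kd / np.pi + ks * np.exp(-(theta_r / m) ** 2)

rng = np.random.default_rng(0)
theta = np.linspace(-1.2, 1.2, 200)                       # reflection angles, radians
true = (0.3, 0.8, 0.15)
data = toy_brdf(theta, *true) + rng.normal(0, 0.01, theta.size)

def cost(p):
    return np.mean((toy_brdf(theta, *p) - data) ** 2)

result = differential_evolution(cost, bounds=[(0, 1), (0, 2), (0.01, 1)], seed=0)
print("recovered (kd, ks, m):", np.round(result.x, 3), " true:", true)
```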

  13. EIA model documentation: Petroleum market model of the national energy modeling system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-28

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level.

  14. EIA model documentation: Petroleum market model of the national energy modeling system

    International Nuclear Information System (INIS)

    1995-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level

  15. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  16. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  17. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the six-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard four-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture Model (HEM) that we present deals with both mixture and relative quantities, allowing in particular to follow both a mixture velocity and a relative velocity. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)

  18. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    Science.gov (United States)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting reference evapotranspiration (ET0) process is highly essential for several agricultural and hydrological applications, especially for the rural water resource systems, water use allocations, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small number of climatic data generated from the minimum and maximum temperatures of the air and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), integrated adaptive neuro-fuzzy inference systems with grid partitioning and subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The implemented monthly time scale data set was collected at the Antalya and Isparta stations which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. The accuracy of validation was focused on multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R²), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the conducted models were highly practical and reliable for the investigated case studies. At the Antalya station, the performance of the GEP and GRNN models was better than the other investigated models, while the performance of the RBNN and ANFIS-SC models was best compared to the other models at the Isparta station. Except for the MLP model, all the other investigated models presented a better performance accuracy compared to the HS and CHS empirical models when applied in a cross-station scenario. A cross-station scenario examination implies the
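    The empirical benchmark used above is compact enough to show directly. The sketch below implements the standard Hargreaves-Samani temperature-based ET0 estimate and two of the validation metrics (RMSE and MAE); the leading coefficient 0.0023 is the usual HS value, which the calibrated CHS variant adjusts locally. The input numbers are illustrative, not the Antalya or Isparta data, and Ra is assumed to be supplied in mm/day of evaporation equivalent.

```python
# Hargreaves-Samani reference evapotranspiration and simple validation metrics.
import numpy as np

def et0_hargreaves_samani(tmax, tmin, ra, c=0.0023):
    """ET0 (mm/day) from daily max/min air temperature (deg C) and
    extraterrestrial radiation ra (mm/day equivalent)."""
    tmean = (tmax + tmin) / 2.0
    return c * ra * (tmean + 17.8) * np.sqrt(np.maximum(tmax - tmin, 0.0))

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(sim))))

# Illustrative monthly values only.
tmax = np.array([12.0, 18.0, 26.0, 33.0])
tmin = np.array([3.0, 7.0, 14.0, 20.0])
ra = np.array([6.0, 10.0, 15.0, 17.0])
print("HS ET0 (mm/day):", np.round(et0_hargreaves_samani(tmax, tmin, ra), 2))
```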

  19. Deformed baryons: constituent quark model vs. bag model

    International Nuclear Information System (INIS)

    Iwamura, Y.; Nogami, Y.

    1985-01-01

    Recently Bhaduri et al. developed a nonrelativistic constituent quark model for deformed baryons. In that model the quarks move in a deformable mean field, and the deformation parameters are determined by minimizing the quark energy subject to the constraint of volume conservation. This constraint is an ad hoc assumption. It is shown that, starting with a bag model, a model similar to that of Bhaduri et al. can be constructed. The deformation parameters are determined by the pressure balance on the bag surface. There is, however, a distinct difference between the two models with respect to the state dependence of the ''volume''. Implications of this difference are discussed

  20. Business Model Innovation: How Iconic Business Models Emerge

    OpenAIRE

    Mikhalkina, T.; Cabantous, L.

    2015-01-01

    Despite ample research on the topic of business model innovation, little is known about the cognitive processes whereby some innovative business models gain the status of iconic representations of particular types of firms. This study addresses the question: How do iconic business models emerge? In other words: How do innovative business models become prototypical exemplars for new categories of firms? We focus on the case of Airbnb, and analyze how six mainstream business media publications ...

  1. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    Theoretical and applied aspects of the development of simulation models for predicting the optimal development of production systems that create tangible products and services are considered. It is shown that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is presented that allows management decisions to be made in production logistics

  2. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
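    The averaging arithmetic described above is short enough to show directly: BIC-based model weights, the weighted-mean estimate, and the within- plus between-model variance decomposition. The three "model outputs" below are toy numbers standing in for the TS-FL, ANN, and NF results, not values from the Tasuj plain study.

```python
# Bayesian model averaging with BIC-approximated weights.
import numpy as np

means = np.array([2.1, 2.6, 2.4])         # each model's hydraulic-conductivity estimate
variances = np.array([0.04, 0.05, 0.03])  # each model's within-model variance
bic = np.array([110.0, 112.5, 118.0])     # each model's BIC (lower is better)

delta = bic - bic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                  # posterior model weights under the BIC approximation

k_bma = np.sum(weights * means)                    # averaged estimate
within = np.sum(weights * variances)               # uncertainty propagated through each model
between = np.sum(weights * (means - k_bma) ** 2)   # uncertainty from model non-uniqueness

print("weights:", np.round(weights, 3))
print("BMA estimate:", round(k_bma, 3), " total variance:", round(within + between, 4))
```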

  3. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcing, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  4. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating the models, the software can automatically fit them to the data and, if data are available, provide a ranking for model selection. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine; thus, all simulation and optimization features of COPASI are readily available. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.
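
    The directive-driven idea of generating a defined set of candidates from one master model can be illustrated generically: enumerate candidates by leaving optional terms out of a master model, fit each, and rank them. The Python sketch below does this for a toy regression master model; it does not use ModelMage's SBML/COPASI interface, and all names and data are illustrative.

```python
from itertools import combinations
import numpy as np

# Master model: y = a*x + b*x**2 + c*sin(x); candidates drop optional terms.
def terms(x):
    return {"linear": x, "quadratic": x ** 2, "sine": np.sin(x)}

def fit_candidate(x, y, keep):
    X = np.column_stack([terms(x)[name] for name in keep])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    n, k = len(y), len(keep)
    aic = n * np.log(np.mean(resid ** 2)) + 2 * k   # simple AIC used only for ranking
    return aic, dict(zip(keep, coef))

rng = np.random.default_rng(0)
x = np.linspace(0, 6, 80)
y = 1.5 * x + 0.8 * np.sin(x) + rng.normal(0, 0.2, x.size)   # data from a reduced model

names = list(terms(x))
candidates = [c for r in range(1, len(names) + 1) for c in combinations(names, r)]
ranking = sorted((fit_candidate(x, y, keep) for keep in candidates), key=lambda t: t[0])
for aic, coefs in ranking:
    print(f"AIC={aic:8.1f}  terms={list(coefs)}")
```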

  5. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...

  6. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  7. Multistate Model Builder (MSMB): a flexible editor for compact biochemical models.

    Science.gov (United States)

    Palmisano, Alida; Hoops, Stefan; Watson, Layne T; Jones, Thomas C; Tyson, John J; Shaffer, Clifford A

    2014-04-04

    Building models of molecular regulatory networks is challenging not just because of the intrinsic difficulty of describing complex biological processes. Writing a model is a creative effort that calls for more flexibility and interactive support than offered by many of today's biochemical model editors. Our model editor MSMB - Multistate Model Builder - supports multistate models created using different modeling styles. MSMB provides two separate advances on existing network model editors. (1) A simple but powerful syntax is used to describe multistate species. This reduces the number of reactions needed to represent certain molecular systems, thereby reducing the complexity of model creation. (2) Extensive feedback is given during all stages of the model creation process on the existing state of the model. Users may activate error notifications of varying stringency on the fly, and use these messages as a guide toward a consistent, syntactically correct model. MSMB default values and behavior during model manipulation (e.g., when renaming or deleting an element) can be adapted to suit the modeler, thus supporting creativity rather than interfering with it. MSMB's internal model representation allows saving a model with errors and inconsistencies (e.g., an undefined function argument; a syntactically malformed reaction). A consistent model can be exported to SBML or COPASI formats. We show the effectiveness of MSMB's multistate syntax through models of the cell cycle and mRNA transcription. Using multistate reactions reduces the number of reactions needed to encode many biochemical network models. This reduces the cognitive load for a given model, thereby making it easier for modelers to build more complex models. The many interactive editing support features provided by MSMB make it easier for modelers to create syntactically valid models, thus speeding model creation. Complete information and the installation package can be found at http
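
    The combinatorial saving that a multistate syntax provides can be illustrated without MSMB's actual grammar: a species with a few independent binary modification sites has exponentially many micro-states, and a single rule expands into many elementary reactions. The Python sketch below (hypothetical site names) enumerates that expansion.

```python
from itertools import product

sites = ["S1", "S2", "S3"]              # three hypothetical phosphorylation sites

# Every combination of site states is one micro-state of the species.
states = list(product([0, 1], repeat=len(sites)))
print(f"{len(states)} micro-states from {len(sites)} sites")

# One multistate-style rule ("phosphorylate any unmodified site") expands into
# many elementary reactions when written out state by state.
reactions = []
for state in states:
    for i, occupied in enumerate(state):
        if not occupied:
            product_state = tuple(1 if j == i else s for j, s in enumerate(state))
            reactions.append((state, sites[i], product_state))

print(f"{len(reactions)} elementary phosphorylation reactions from a single rule")
for substrate, site, prod in reactions[:3]:
    print(substrate, f"--{site}-->", prod)
```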

  8. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  9. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" comprises documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  10. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. SWC therefore controls plant distribution over the Earth's surface, from deserts and grassland to rain forests. Unfortunately, few SWC data are available because its measurement is very time consuming and costly and needs specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents its inclusion as a predictor in plant species distribution models (SDMs). The aims of this study were, first, to test a new methodology that allows the problem of scarce SWC measurements to be overcome and, second, to model and spatially project SWC in order to improve plant SDMs through the inclusion of an SWC parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF=0 to pF=4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several predictors to include in the model. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only the bivariate models with adjusted R2 > 0.5 for each SWC at different pF. The second step consisted of running plant SDMs including the modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. SWC ensemble models remained very good, with
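
    A minimal sketch of the ensemble step described above, assuming ordinary least-squares bivariate models, the adjusted R2 > 0.5 retention rule, and adjusted-R2 weighting of the retained members; the predictor names follow the abstract but the data are synthetic.

```python
import numpy as np
from itertools import combinations

def adjusted_r2(y, yhat, p):
    n = len(y)
    r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def bivariate_ensemble(X, y, names, threshold=0.5):
    """Fit every two-predictor linear model; keep those with adjusted R2
    above the threshold (a sketch of the ensemble idea, not the study code)."""
    members = []
    for i, j in combinations(range(X.shape[1]), 2):
        A = np.column_stack([np.ones(len(y)), X[:, i], X[:, j]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2a = adjusted_r2(y, A @ coef, p=2)
        if r2a > threshold:
            members.append(((names[i], names[j]), coef, r2a))
    return members

def ensemble_predict(members, x_row, names):
    preds = [c[0] + c[1] * x_row[names.index(ni)] + c[2] * x_row[names.index(nj)]
             for (ni, nj), c, _ in members]
    weights = [r2a for *_, r2a in members]
    return np.average(preds, weights=weights)

rng = np.random.default_rng(1)
names = ["clay", "silt", "organic_matter", "elevation"]      # illustrative predictors
X = rng.normal(size=(24, 4))                                  # 24 samples, as in the study
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.2, 24)    # synthetic SWC at one pF
members = bivariate_ensemble(X, y, names)
print(len(members), "member models retained")
print("ensemble prediction for first sample:", ensemble_predict(members, X[0], names))
```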

  11. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    and test models systematically, efficiently and reliably. In this way, development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem specific models is presented. The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene.

  12. A simplified model exploration research of new anisotropic diffuse radiation model

    International Nuclear Information System (INIS)

    Yao, Wanxiang; Li, Zhengrong; Wang, Xiao; Zhao, Qun; Zhang, Zhigang; Lin, Lin

    2016-01-01

    Graphical abstract: The specific process of measured diffuse radiation data. - Highlights: • A simplified diffuse radiation model is extremely important for solar radiation simulation and energy simulation. • A new simplified anisotropic diffuse radiation model (NSADR model) is proposed. • The accuracy of the existing models and the NSADR model is compared against measured values. • The accuracy of the NSADR model is higher than that of the existing models, and it is suitable for calculating diffuse radiation. - Abstract: A more accurate new anisotropic diffuse radiation model (NADR model) has been proposed, but its parameters and calculation procedure are complex, so it is difficult to use widely in simulation software and engineering calculations. Based on an analysis of the diffuse radiation model and measured diffuse radiation data, this paper puts forward three hypotheses: (1) diffuse radiation from the sky horizontal region is concentrated in a very thin layer close to the line source; (2) diffuse radiation from the circumsolar region is concentrated at the position of the sun; (3) diffuse radiation from the orthogonal region is concentrated at the point located at a 90 degree angle to the sun. Based on these hypotheses, the NADR model is simplified to a new simplified anisotropic diffuse radiation model (NSADR model). The accuracy of the NADR model and its simplified model (NSADR model) is then compared with that of existing models on the basis of the measured values, and the result shows that the Perez model and its simplified model are relatively accurate among the existing models. However, the accuracy of these two models is lower than that of the NADR and NSADR models because they neglect the influence of the orthogonal diffuse radiation. The accuracy of the NSADR model is higher than that of the existing models; a further advantage is that the NSADR model simplifies the parameter solution and calculation process. Therefore it is more suitable for
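
    The abstract does not give the NSADR equations, so the sketch below only illustrates the general structure such anisotropic models share, using the well-known Hay-Davies form: diffuse irradiance on a tilted plane as an anisotropy-weighted mix of a circumsolar term and an isotropic sky-dome term. All inputs are illustrative, and none of this is the NADR/NSADR formulation.

```python
import numpy as np

def hay_davies_diffuse(dhi, dni, zenith_deg, incidence_deg, tilt_deg, e0=1367.0):
    """Diffuse irradiance on a tilted plane with the Hay-Davies anisotropic model
    (shown only to illustrate the circumsolar + sky-dome structure).

    dhi, dni      : horizontal diffuse and direct-normal irradiance [W/m^2]
    zenith_deg    : solar zenith angle [deg]
    incidence_deg : beam angle of incidence on the tilted plane [deg]
    tilt_deg      : surface tilt from horizontal [deg]
    e0            : extraterrestrial normal irradiance [W/m^2]
    """
    z, inc, tilt = np.radians([zenith_deg, incidence_deg, tilt_deg])
    ai = dni / e0                                       # anisotropy index (circumsolar weight)
    rb = max(np.cos(inc), 0.0) / max(np.cos(z), 0.01)   # beam projection ratio
    circumsolar = ai * rb
    isotropic = (1.0 - ai) * (1.0 + np.cos(tilt)) / 2.0
    return dhi * (circumsolar + isotropic)

# Example: a 30-degree tilted surface under moderate sky conditions
print(hay_davies_diffuse(dhi=200.0, dni=600.0, zenith_deg=50.0,
                         incidence_deg=25.0, tilt_deg=30.0))
```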

  13. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D. W. Wu

    2003-07-16

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  14. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-10-27

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  15. Biosphere Model Report

    International Nuclear Information System (INIS)

    D. W. Wu

    2003-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  16. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
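
    As a small numerical illustration of how two widely used criteria implement different complexity interpretations, the sketch below computes AIC (oriented toward predictive density, nonconsistent) and BIC (oriented toward model probability, consistent) for a family of polynomial candidates; data and models are synthetic.

```python
import numpy as np

def aic_bic(rss, n, k):
    """AIC and BIC for a Gaussian-error least-squares model with k fitted
    parameters and residual sum of squares rss (additive constants dropped)."""
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

rng = np.random.default_rng(0)
n = 60
x = np.linspace(-2, 2, n)
y = 1.0 + 0.5 * x + rng.normal(0, 0.3, n)        # the data-generating model is linear

for degree in range(1, 6):                        # candidate polynomial models
    coef = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coef, x)) ** 2)
    aic, bic = aic_bic(rss, n, k=degree + 1)
    print(f"degree {degree}:  AIC={aic:8.2f}  BIC={bic:8.2f}")
# BIC penalises extra parameters more heavily (k*ln n versus 2k once n > e^2),
# reflecting its goal of identifying the data-generating model.
```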

  17. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain.
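
    For reference, the model-free quantity discussed above is the temporal-difference reward prediction error; a minimal Q-learning sketch with a toy two-state task (not the study's paradigm) is shown below.

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One model-free update: the reward prediction error is the gap between the
    received-plus-discounted-future value and the current estimate Q(s, a)."""
    rpe = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * rpe
    return rpe

# Toy two-state, two-action task in which only one state-action pair pays off
rng = np.random.default_rng(0)
Q = np.zeros((2, 2))
for _ in range(200):
    s, a = rng.integers(2), rng.integers(2)
    r = 1.0 if (s == 0 and a == 1) else 0.0
    s_next = rng.integers(2)
    q_learning_step(Q, s, a, r, s_next)
print("learned action values:\n", Q)
```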

  18. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  19. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der (California Univ., San Francisco, CA (United States) Lawrence Berkeley Lab., CA (United States))

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  20. Modelling of JET diagnostics using Bayesian Graphical Models

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, J. [IPP Greifswald, Greifswald (Germany); Ford, O. [Imperial College, London (United Kingdom); McDonald, D.; Hole, M.; Nessi, G. von; Meakins, A.; Brix, M.; Thomsen, H.; Werner, A.; Sirinelli, A.

    2011-07-01

    The mapping between physics parameters (such as densities, currents, flows, temperatures etc) defining the plasma 'state' under a given model and the raw observations of each plasma diagnostic 1) depends on the particular physics model used, and 2) is inherently probabilistic, owing to uncertainties on both the observations and instrumental aspects of the mapping, such as calibrations, instrument functions etc. A flexible and principled way of modelling such interconnected probabilistic systems is through so-called Bayesian graphical models. Being an amalgam of graph theory and probability theory, Bayesian graphical models can simulate the complex interconnections between physics models and diagnostic observations from multiple heterogeneous diagnostic systems, making it relatively easy to optimally combine the observations from multiple diagnostics for joint inference on parameters of the underlying physics model, which can itself be represented as part of the graph. At JET about 10 diagnostic systems have to date been modelled in this way, and this has led to a number of new results, including: the reconstruction of the flux surface topology and q-profiles without any specific equilibrium assumption, using information from a number of different diagnostic systems; profile inversions taking into account the uncertainties in the flux surface positions; and a substantial increase in accuracy of JET electron density and temperature profiles, including improved pedestal resolution, through the joint analysis of three diagnostic systems. It is believed that the Bayesian graph approach could potentially be utilised for very large sets of diagnostics, providing a generic data analysis framework for nuclear fusion experiments that would be able to optimally utilize the information from multiple diagnostics simultaneously, and where the explicit graph representation of the connections to underlying physics models could be used for sophisticated model testing. This
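
    The core idea can be shown in a few lines without the JET framework itself: two hypothetical "diagnostics" observe functions of the same physics parameter, and multiplying their likelihoods with a prior on a grid gives the joint posterior. Everything below (signal models, noise levels, numbers) is illustrative.

```python
import numpy as np

# Unknown physics parameter: an electron density n_e (arbitrary units)
grid = np.linspace(0.0, 10.0, 1001)
prior = np.ones_like(grid)                       # flat prior over the grid

def gaussian_like(pred, obs, sigma):
    """Gaussian likelihood of an observation given the model prediction."""
    return np.exp(-0.5 * ((obs - pred) / sigma) ** 2)

# Diagnostic 1 measures n_e directly; diagnostic 2 measures 2*n_e
# (e.g. a line-integrated signal); both with Gaussian noise.
obs1, sigma1 = 4.2, 0.5
obs2, sigma2 = 8.9, 0.8

posterior = prior * gaussian_like(grid, obs1, sigma1) * gaussian_like(2.0 * grid, obs2, sigma2)
posterior /= np.trapz(posterior, grid)           # normalise the joint posterior

mean = np.trapz(grid * posterior, grid)
std = np.sqrt(np.trapz((grid - mean) ** 2 * posterior, grid))
print(f"joint posterior: n_e = {mean:.2f} +/- {std:.2f}")
```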

  1. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention and improve communication among speakers of different native languages. However, despite the benefits that emoticons provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden-Turner Culture Model and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  2. Business Models and Business Model Innovation

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Saebi, Tina

    2018-01-01

    While research on business models and business model innovation continues to exhibit growth, the field is still, even after more than two decades of research, characterized by a striking lack of cumulative theorizing and an opportunistic borrowing of more or less related ideas from neighbouring...

  3. The bumper bundle book of modelling NLP modelling made simple

    CERN Document Server

    Burgess, Fran

    2014-01-01

    A Neurolinguistic Programming textbook which focusses on the core activity of NLP - modelling. It covers the thinking behind NLP modelling, presents an extensive range of modelling methodologies and skills, offers applications of modelling, and provides specific details for model and technique construction.

  4. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  5. Ecological models and pesticide risk assessment: current modeling practice.

    Science.gov (United States)

    Schmolke, Amelie; Thorbek, Pernille; Chapman, Peter; Grimm, Volker

    2010-04-01

    Ecological risk assessments of pesticides usually focus on risk at the level of individuals, and are carried out by comparing exposure and toxicological endpoints. However, in most cases the protection goal is populations rather than individuals. On the population level, effects of pesticides depend not only on exposure and toxicity, but also on factors such as life history characteristics, population structure, timing of application, presence of refuges in time and space, and landscape structure. Ecological models can integrate such factors and have the potential to become important tools for the prediction of population-level effects of exposure to pesticides, thus allowing extrapolations, for example, from laboratory to field. Indeed, a broad range of ecological models have been applied to chemical risk assessment in the scientific literature, but so far such models have only rarely been used to support regulatory risk assessments of pesticides. To better understand the reasons for this situation, the current modeling practice in this field was assessed in the present study. The scientific literature was searched for relevant models and assessed according to nine characteristics: model type, model complexity, toxicity measure, exposure pattern, other factors, taxonomic group, risk assessment endpoint, parameterization, and model evaluation. The present study found that, although most models were of a high scientific standard, many of them would need modification before they are suitable for regulatory risk assessments. The main shortcomings of currently available models in the context of regulatory pesticide risk assessments were identified. When ecological models are applied to regulatory risk assessments, we recommend reviewing these models according to the nine characteristics evaluated here.

  6. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different model validation techniques. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
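
    As an illustration of validating a fitted logistic regression with quantitative performance measures, the sketch below evaluates discrimination (ROC AUC) and probabilistic accuracy (Brier score) on a held-out split using scikit-learn; the dataset and the choice of measures are assumptions, not those of the paper.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome data standing in for a management study dataset
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_val = model.predict_proba(X_val)[:, 1]

print("validation AUC  :", round(roc_auc_score(y_val, p_val), 3))     # discrimination
print("validation Brier:", round(brier_score_loss(y_val, p_val), 3))  # probabilistic accuracy
```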

  7. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    Science.gov (United States)

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid
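
    MDPE and MDAPE are typically computed on percentage prediction errors; a small sketch under that assumption (illustrative serum levels, not study data) is given below.

```python
import numpy as np

def mdpe_mdape(observed, predicted):
    """Median prediction error and median absolute prediction error, expressed as
    percentages of the observed values (a common convention; the paper's exact
    definition may differ)."""
    obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
    pe = 100.0 * (pred - obs) / obs
    return np.median(pe), np.median(np.abs(pe))

observed = [2.1, 4.0, 6.5, 8.2, 1.5]      # illustrative serum levels (mg/L)
predicted = [2.0, 4.3, 6.2, 8.5, 1.6]
mdpe, mdape = mdpe_mdape(observed, predicted)
print(f"MDPE = {mdpe:.2f}%, MDAPE = {mdape:.2f}%")
```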

  8. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    Directory of Open Access Journals (Sweden)

    Anna Gomes

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to

  9. Viscoelastic Model for Lung Parenchyma for Multi-Scale Modeling of Respiratory System, Phase II: Dodecahedral Micro-Model

    Energy Technology Data Exchange (ETDEWEB)

    Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.

    2012-03-01

    In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.

  10. Ventilation Model

    International Nuclear Information System (INIS)

    Yang, H.

    1999-01-01

    The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future

  11. Methodology for geometric modelling. Presentation and administration of site descriptive models; Metodik foer geometrisk modellering. Presentation och administration av platsbeskrivande modeller

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hermanson, Jan [Golder Associates (Sweden)

    2001-03-01

    This report presents a methodology to construct, visualise and present geoscientific descriptive models based on data from the site investigations that SKB is currently performing in order to build an underground nuclear waste disposal facility in Sweden. It is designed for interaction with SICADA (SKB's site characterisation database) and RVS (SKB's Rock Visualisation System). However, the concepts of the methodology are general and can be used with other tools capable of handling 3D geometries and parameters. The descriptive model is intended to be an instrument where site investigation data from all disciplines are put together to form a comprehensive visual interpretation of the studied rock mass. The methodology has four main components: 1. Construction of a geometrical model of the interpreted main structures at the site. 2. Description of the geoscientific characteristics of the structures. 3. Description and geometrical implementation of the geometric uncertainties in the interpreted model structures. 4. Quality system for the handling of the geometrical model, its associated database and some aspects of the technical auditing. The geometrical model forms a basis for understanding the main elements and structures of the investigated site. Once the interpreted geometries are in place in the model, the system allows for adding descriptive and quantitative data to each modelled object through a system of intuitive menus. The associated database gives each geometrical object a complete quantitative description across all geoscientific disciplines, including variabilities, uncertainties in interpretation and a full version history. The complete geometrical model and its associated database of object descriptions are to be recorded in a central quality system. Official, new and old versions of the model are administered centrally in order to have complete quality assurance of each step in the interpretation process. The descriptive model is a cornerstone in the understanding of the

  12. Coupled model of INM-IO global ocean model, CICE sea ice model and SCM OIAS framework

    Science.gov (United States)

    Bayburin, Ruslan; Rashit, Ibrayev; Konstantin, Ushakov; Vladimir, Kalmykov; Gleb, Dyakonov

    2015-04-01

    The status of a coupled Arctic model of ocean and sea ice is presented. The model consists of the high-resolution INM IO global ocean component, the Los Alamos National Laboratory CICE sea ice model and the SCM OIAS framework for ocean-ice-atmosphere-land coupled modeling on massively parallel architectures. The model is currently under development at the Institute of Numerical Mathematics (INM), the Hydrometeorological Center (HMC) and the P.P. Shirshov Institute of Oceanology (IO). It is aimed at modeling the intra-annual variability of hydrodynamics in the Arctic. The computational characteristics of the world ocean-sea ice coupled model governed by SCM OIAS are presented. The model is parallelized using MPI technologies and can currently use up to 5000 cores efficiently. Details of the programming implementation, the computational configuration and the parametrization of physical phenomena are analyzed in terms of the coupling complex. Results of a five-year computational experiment on the evolution of sea ice, snow and ocean state in the Arctic region, on a tripole grid with a horizontal resolution of 3-5 kilometers and forced by an atmospheric field from a repeating "normal" annual course taken from the CORE1 experiment database, are presented and analyzed in terms of the state of vorticity and the expansion of warm Atlantic water.

  13. Models of breast cancer: quo vadis, animal modeling?

    International Nuclear Information System (INIS)

    Wagner, Kay-Uwe

    2004-01-01

    Rodent models for breast cancer have for many decades provided unparalleled insights into cellular and molecular aspects of neoplastic transformation and tumorigenesis. Despite recent improvements in the fidelity of genetically engineered mice, rodent models are still being criticized by many colleagues for not being 'authentic' enough to the human disease. Motives for this criticism are manifold and range from a very general antipathy against the rodent model system to well-founded arguments that highlight physiological variations between species. Newly proposed differences in genetic pathways that cause cancer in humans and mice invigorated the ongoing discussion about the legitimacy of the murine system to model the human disease. The present commentary intends to stimulate a debate on this subject by providing the background about new developments in animal modeling, by disputing suggested limitations of genetically engineered mice, and by discussing improvements but also ambiguous expectations on the authenticity of xenograft models to faithfully mimic the human disease

  14. Mathematical modelling

    CERN Document Server

    2016-01-01

    This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.

  15. Empirical investigation on modeling solar radiation series with ARMA–GARCH models

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Yan, Dong; Zhao, Na; Zhou, Jianzhong

    2015-01-01

    Highlights: • Six ARMA–GARCH(-M) models are applied to model and forecast solar radiation. • The ARMA–GARCH(-M) models produce more accurate radiation forecasts than conventional methods. • ARMA–GARCH-M models are shown to be more effective for forecasting the mean and volatility of solar radiation. • The ARMA–EGARCH-M model is robust and the ARMA–sGARCH-M model is very competitive. - Abstract: Simulation of radiation is one of the most important issues in solar utilization. Time series models are useful tools for the estimation and forecasting of solar radiation series and their changes. In this paper, the effectiveness of autoregressive moving average (ARMA) models with various generalized autoregressive conditional heteroskedasticity (GARCH) processes, namely ARMA–GARCH models, is evaluated for radiation series. Six different GARCH approaches, comprising three different ARMA–GARCH models and the corresponding GARCH-in-mean (ARMA–GARCH-M) models, are applied to radiation data sets from two representative climate stations in China. Multiple evaluation metrics of modeling sufficiency are used to evaluate the performance of the models. The results show that the ARMA–GARCH(-M) models are effective for radiation series estimation. Both in fitting and in prediction of radiation series, the ARMA–GARCH(-M) models show better modeling sufficiency than traditional models; the ARMA–EGARCH-M model is robust at both sites and the ARMA–sGARCH-M model appears very competitive. Comparisons of statistical diagnostics and model performance clearly show that the ARMA–GARCH-M models make the mean radiation equations more sufficient. The ARMA–GARCH(-M) models are therefore recommended as the preferred method for modeling solar radiation series
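
    A minimal sketch of fitting an AR mean equation with a GARCH(1,1) variance process, using the Python arch package on a synthetic series; the package choice, the model orders and the data are assumptions, and the paper's six model variants are not reproduced.

```python
import numpy as np
from arch import arch_model

# Synthetic stand-in for a de-seasonalised daily solar radiation series
rng = np.random.default_rng(0)
n = 1000
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal(0.0, 1.0)

# AR(1) mean equation with a GARCH(1,1) conditional-variance equation
am = arch_model(y, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="normal")
res = am.fit(disp="off")
print(res.summary())

# One-step-ahead forecasts of the conditional mean and variance
fc = res.forecast(horizon=1)
print("mean forecast    :", float(fc.mean.iloc[-1, 0]))
print("variance forecast:", float(fc.variance.iloc[-1, 0]))
```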

  16. Hydrogeological conceptual model development and numerical modelling using CONNECTFLOW, Forsmark modelling stage 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven (SF GeoLogic AB, Taeby (Sweden)); Hartley, Lee; Jackson, Peter; Roberts, David (Serco TAP (United Kingdom)); Marsic, Niko (Kemakta Konsult AB, Stockholm (Sweden))

    2008-05-15

    Three versions of a site descriptive model (SDM) have been completed for the Forsmark area. Version 0 established the state of knowledge prior to the start of the site investigation programme. Version 1.1 was essentially a training exercise and was completed during 2004. Version 1.2 was a preliminary site description and concluded the initial site investigation work (ISI) in June 2005. Three modelling stages are planned for the complete site investigation work (CSI). These are labelled stage 2.1, 2.2 and 2.3, respectively. An important component of each of these stages is to address and continuously try to resolve discipline-specific uncertainties of importance for repository engineering and safety assessment. Stage 2.1 included an updated geological model for Forsmark and aimed to provide a feedback from the modelling working group to the site investigation team to enable completion of the site investigation work. Stage 2.2 described the conceptual understanding and the numerical modelling of the bedrock hydrogeology in the Forsmark area based on data freeze 2.2. The present report describes the modelling based on data freeze 2.3, which is the final data freeze in Forsmark. In comparison, data freeze 2.3 is considerably smaller than data freeze 2.2. Therefore, stage 2.3 deals primarily with model confirmation and uncertainty analysis, e.g. verification of important hypotheses made in stage 2.2 and the role of parameter uncertainty in the numerical modelling. On the whole, the work reported here constitutes an addendum to the work reported in stage 2.2. Two changes were made to the CONNECTFLOW code in stage 2.3. These serve to: 1) improve the representation of the hydraulic properties of the regolith, and 2) improve the conditioning of transmissivity of the deformation zones against single-hole hydraulic tests. The changes to the modelling of the regolith were made to improve the consistency with models made with the MIKE SHE code, which involved the introduction

  17. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    Science.gov (United States)

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  18. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    International Nuclear Information System (INIS)

    1996-01-01

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues

  19. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  20. Rotating universe models

    International Nuclear Information System (INIS)

    Tozini, A.V.

    1984-01-01

    A review is made of some properties of rotating Universe models. Gödel's model is identified as a generalized tilted model. Some properties of new solutions of Einstein's equations, which are rotating non-stationary Universe models, are presented and analyzed. These models contain Gödel's model as a particular case. Non-stationary cosmological models are found which generalize Gödel's metric in the same way that Friedmann's model generalizes Einstein's model. (L.C.) [pt

  1. New isomers in the neutron-rich A∼190 mass region

    International Nuclear Information System (INIS)

    Caamano, M.

    2002-02-01

    Previously unobserved isomeric states in 188Ta, 190W, 192Re, 193Re, 195Os, 197Ir, 198Ir, 200Pt, 201Pt, 202Pt and 203Au, with half-lives ranging from 10 ns to 290 μs have been populated and studied using a fragmentation reaction in conjunction with a forward focussing spectrometer. In most cases, this provided the first ever spectroscopic data made available for the nucleus, and 200Pt presented the first new seniority 4 state, on the basis of γ-γ coincidences, following a fragmentation reaction. Half-lives have been measured and tentative level schemes have been drawn for each isomer, spins and parities being consistent with blocked BCS calculations, hindrance factors, systematics and the relative intensities of γ-rays and X-rays (where possible). Isomeric ratios have been measured, values ranging from 1 % to 64 %. Potential Energy Surface calculations were performed in parallel to the blocked BCS calculations, in order to provide deformation parameters, excitation energies and quasiparticle configurations. Ground state (or lowest level) shape calculations reveal a change from axially symmetric, through triaxial, to spherical shapes across the data set, from 188Ta to 203Au, as 208Pb is approached. Weisskopf hindrance factors provide evidence for the erosion of the goodness of the K-quantum number, compatible with soft or axially symmetric shapes. The prolate-oblate phase transition region, with respect to tungsten, osmium and platinum, shows 195Os to be the pivotal nucleus in the osmium isotopic chain, with a calculated triaxial ground state. On comparison with the systematics of the region, results obtained for 190W show evidence for a Z ≤ 74 sub-shell closure, analogous to that at Z = 64. Finally, new isotopes, 167Tb, 170Dy and 199Ir were discovered. (author)
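
    The Weisskopf hindrance factors cited above compare measured transition rates with Weisskopf single-particle estimates. As a rough, self-contained illustration of that comparison (not the authors' analysis code), the Python sketch below evaluates the standard single-particle estimate for an electric multipole transition and forms a hindrance factor from a measured half-life; the nucleus, transition energy and half-life used are placeholder values only.

      import numpy as np

      HBARC = 197.327          # MeV*fm
      ALPHA = 1.0 / 137.036    # fine-structure constant
      C_FM = 2.9979e23         # speed of light [fm/s]

      def double_factorial(n):
          out = 1
          while n > 1:
              out *= n
              n -= 2
          return out

      def b_weisskopf_el(L, A):
          """Weisskopf single-particle reduced transition probability B_W(EL) in e^2 fm^(2L)."""
          return (3.0 / (L + 3.0)) ** 2 / (4.0 * np.pi) * (1.2 * A ** (1.0 / 3.0)) ** (2 * L)

      def rate_weisskopf_el(L, A, e_gamma_mev):
          """Weisskopf estimate of the EL decay rate [1/s] for a gamma ray of energy e_gamma_mev [MeV]."""
          pref = 8.0 * np.pi * (L + 1.0) / (L * double_factorial(2 * L + 1) ** 2)
          k = e_gamma_mev / HBARC                      # photon wave number [1/fm]
          return pref * ALPHA * C_FM * k ** (2 * L + 1) * b_weisskopf_el(L, A)

      # placeholder example: an E2 transition of 0.5 MeV in an A = 190 nucleus,
      # with a hypothetical measured half-life of 100 microseconds
      A, L, e_gamma, t_half_meas = 190, 2, 0.5, 100e-6
      t_half_w = np.log(2.0) / rate_weisskopf_el(L, A, e_gamma)
      print("Weisskopf half-life: %.3e s" % t_half_w)
      print("hindrance factor F_W = T1/2(exp) / T1/2(W) = %.2e" % (t_half_meas / t_half_w))

    Hindrance factors much greater than 1, as in the K-isomer discussion above, signal that a transition is strongly retarded relative to the single-particle estimate.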

  2. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport

  3. Nonintersecting string model and graphical approach: equivalence with a Potts model

    International Nuclear Information System (INIS)

    Perk, J.H.H.; Wu, F.Y.

    1986-01-01

    Using a graphical method the authors establish the exact equivalence of the partition function of a q-state nonintersecting string (NIS) model on an arbitrary planar, even-valenced lattice with that of a q²-state Potts model on a relaxed lattice. The NIS model considered in this paper is one in which the vertex weights are expressible as sums of those of basic vertex types, and the resulting Potts model generally has multispin interactions. For the square and Kagome lattices this leads to the equivalence of a staggered NIS model with Potts models with anisotropic pair interactions, indicating that these NIS models have a first-order transition for q greater than 2. For the triangular lattice the NIS model turns out to be the five-vertex model of Wu and Lin and it relates to a Potts model with two- and three-site interactions. The most general model the authors discuss is an oriented NIS model which contains the six-vertex model and the NIS models of Stroganov and Schultz as special cases

  4. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the existing error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using a Kalman filter.
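
    As a concrete (and deliberately simplified) illustration of model-based SOC estimation of the kind discussed above, the sketch below runs an extended Kalman filter on a first-order Thevenin equivalent-circuit cell; the circuit parameters, the linear OCV curve and the load profile are hypothetical placeholders, not the cells or test data of the study.

      import numpy as np

      # Hypothetical first-order Thevenin parameters (not from the paper)
      Q = 2.0 * 3600.0      # capacity [A*s] (2 Ah)
      R0 = 0.01             # ohmic resistance [ohm]
      R1, C1 = 0.015, 2000.0  # polarization resistance [ohm] and capacitance [F]
      dt = 1.0              # time step [s]
      a = np.exp(-dt / (R1 * C1))

      def ocv(soc):                  # hypothetical linear OCV curve
          return 3.2 + 0.7 * soc

      def docv(soc):                 # its derivative with respect to SOC
          return 0.7

      def step(x, i):
          """Propagate state x = [soc, v1] one step for current i (discharge > 0)."""
          return np.array([x[0] - i * dt / Q, a * x[1] + R1 * (1.0 - a) * i])

      def terminal_voltage(x, i):
          return ocv(x[0]) - x[1] - R0 * i

      rng = np.random.default_rng(0)
      n = 3000
      current = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / 600.0)   # load profile [A]

      # simulate the "true" cell and noisy voltage measurements
      x_true = np.array([0.9, 0.0])
      soc_true, volts = [], []
      for i in current:
          x_true = step(x_true, i)
          soc_true.append(x_true[0])
          volts.append(terminal_voltage(x_true, i) + rng.normal(0.0, 5e-3))

      # extended Kalman filter started from a deliberately wrong initial SOC
      x = np.array([0.5, 0.0])
      P = np.diag([0.1, 1e-4])
      Qk = np.diag([1e-10, 1e-8])        # process noise
      Rk = np.array([[25e-6]])           # measurement noise, (5 mV)^2
      F = np.array([[1.0, 0.0], [0.0, a]])
      soc_est = []
      for k, i in enumerate(current):
          x = step(x, i)                 # predict
          P = F @ P @ F.T + Qk
          H = np.array([[docv(x[0]), -1.0]])
          y = volts[k] - terminal_voltage(x, i)   # innovation on the terminal voltage
          S = H @ P @ H.T + Rk
          K = P @ H.T / S[0, 0]
          x = x + (K * y).ravel()
          P = (np.eye(2) - K @ H) @ P
          soc_est.append(x[0])

      print("final SOC error: %.4f" % abs(soc_est[-1] - soc_true[-1]))

    Because the measurement update acts through the OCV slope and the RC dynamics, any error in those model elements propagates directly into the SOC estimate, which is the model-accuracy effect the study quantifies.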

  6. Simulation Model of Membrane Gas Separator Using Aspen Custom Modeler

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dong-keun [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of); Shin, Gahui; Yun, Jinwon; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-12-15

    Membranes are used to separate pure gas from gas mixtures. In this study, three different types of mass transport through a membrane were developed in order to investigate the gas separation capabilities of a membrane. The three different models typically used are a lumped model, a multi-cell model, and a discretization model. Despite the multi-cell model producing similar results to a discretization model, the discretization model was selected for this investigation, due to the cell number dependence of a multi-cell model. The mass transport model was then used to investigate the effects of pressure difference, flow rate, total exposed area, and permeability. The results showed that the pressure difference increased with the stage cut, but the selectivity was a trade-off for the increasing pressure difference. Additionally, even though permeability is an important parameter, the selectivity and stage cut of the membrane converged as permeability increased.

  7. Mathematical modelling

    DEFF Research Database (Denmark)

    Blomhøj, Morten

    2004-01-01

    Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical...... roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models are a formative aim in their own right for mathematics teaching in general education. The paper presents a theoretical...... modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics into the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive...

  8. Simple Models for the Dynamic Modeling of Rotating Tires

    Directory of Open Access Journals (Sweden)

    J.C. Delamotte

    2008-01-01

    Large Finite Element (FE) models of tires are currently used to predict low frequency behavior and to obtain dynamic model coefficients used in multi-body models for riding and comfort. However, to predict higher frequency behavior, which may explain irregular wear, critical rotating speeds and noise radiation, FE models are not practical. Detailed FE models are not adequate for optimization and uncertainty predictions either, as in such applications the dynamic solution must be computed a number of times. Therefore, there is a need for simpler models that can capture the physics of the tire and be used to compute the dynamic response with a low computational cost. In this paper, the spectral (or continuous) element approach is used to derive such a model. A circular beam spectral element that takes into account the string effect is derived, and a method to simulate the response to a rotating force is implemented in the frequency domain. The behavior of a circular ring under different internal pressures is investigated using modal and frequency/wavenumber representations. Experimental results obtained with a real untreaded truck tire are presented and qualitatively compared with the simple model predictions with good agreement. No attempt is made to obtain equivalent parameters for the simple model from the real tire results. On the other hand, the simple model fails to represent the correct variation of the quotient of the natural frequency by the number of circumferential wavelengths with the mode count. Nevertheless, some important features of the real tire dynamic behavior, such as the generation of standing waves and part of the frequency/wavenumber behavior, can be investigated using the proposed simplified model.

  9. BAYESIAN MODELS FOR SPECIES DISTRIBUTION MODELLING WITH ONLY-PRESENCE RECORDS

    Directory of Open Access Journals (Sweden)

    Bartolo de Jesús Villar-Hernández

    2015-08-01

    One of the central issues in ecology is the study of the geographical distribution of species of flora and fauna through Species Distribution Models (SDM). Recently, scientific interest has focused on presence-only records. Two recent approaches have been proposed for this problem: a model based on the maximum likelihood method (Maxlike) and an inhomogeneous Poisson process model (IPP). In this paper we discuss two Bayesian approaches, called MaxBayes and IPPBayes, based on the Maxlike and IPP models, respectively. To illustrate these proposals, we implemented two study examples: (1) both models were implemented on a simulated dataset, and (2) we modeled the potential distribution of the genus Dalea in the Tehuacan-Cuicatlán biosphere reserve with both models, and the results were compared with those of Maxent. The results show that both models, MaxBayes and IPPBayes, are viable alternatives when species distributions are modeled with presence-only records. For the simulated dataset, MaxBayes achieved prevalence estimation even when the number of records was small. In the real dataset example, both models predict potential distributions similar to those of Maxent.
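
    To make the inhomogeneous Poisson process (IPP) idea concrete, the sketch below writes down its presence-only log-likelihood on a discretized one-dimensional study region with a single hypothetical covariate and maximizes it numerically; it is a maximum-likelihood toy example, not the Bayesian MaxBayes/IPPBayes implementation of the paper.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)

      # hypothetical 1-D study region split into grid cells with one environmental covariate
      n_cells = 200
      area = 1.0                                # area of each cell
      x_env = np.linspace(-2.0, 2.0, n_cells)   # standardized covariate (e.g. elevation)

      beta_true = np.array([-2.0, 1.2])
      lam_true = np.exp(beta_true[0] + beta_true[1] * x_env)   # intensity per unit area

      # simulate presence-only records: Poisson counts per cell, keep the cell index of each point
      counts = rng.poisson(lam_true * area)
      obs_cells = np.repeat(np.arange(n_cells), counts)

      def neg_loglik(beta):
          # IPP log-likelihood: sum of log-intensity at presence points minus integrated intensity
          lam = np.exp(beta[0] + beta[1] * x_env)
          return -(np.sum(np.log(lam[obs_cells])) - area * np.sum(lam))

      fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
      print("true betas:", beta_true, " estimated:", np.round(fit.x, 3))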

  10. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  11. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
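
    As a minimal numerical companion to the vector autoregressive side of this discussion, the sketch below simulates a bivariate VAR(1) process and recovers its coefficient matrix by least squares; the off-diagonal entries are the quantities examined by Granger-causality tests. The coefficient values are illustrative assumptions, not an implementation of the hybrid models proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(5)

      # illustrative coefficient matrix: series 2 "Granger-causes" series 1, not vice versa
      A_true = np.array([[0.5, 0.3],
                         [0.0, 0.4]])
      n = 5000
      x = np.zeros((n, 2))
      for t in range(1, n):
          x[t] = A_true @ x[t - 1] + rng.normal(0.0, 1.0, 2)   # x_t = A x_{t-1} + e_t

      # ordinary least squares: X_now = X_past @ A.T, solved column-wise
      X_past, X_now = x[:-1], x[1:]
      A_hat = np.linalg.lstsq(X_past, X_now, rcond=None)[0].T
      print("estimated A:\n", np.round(A_hat, 3))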

  12. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood Tools Introduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selection Score and Wald tests Classical goodness of fit Model selection bias General Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problems Simulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...

  13. From spiking neuron models to linear-nonlinear models.

    Science.gov (United States)

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
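
    The static nonlinearity of such a linear-nonlinear cascade is closely related to the neuron's stationary input-rate transfer function. The sketch below, a minimal illustration rather than the analytic reduction used in the paper, simulates a leaky integrate-and-fire neuron with noisy drive and tabulates its firing rate for a few mean input levels; all parameter values are generic textbook-style assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      # hypothetical LIF parameters (not taken from the paper)
      tau_m, v_rest, v_thresh, v_reset = 0.020, -65e-3, -50e-3, -65e-3   # [s], [V]
      dt, t_sim = 1e-4, 5.0                                              # [s]
      sigma = 2e-3                                                       # noise amplitude [V]

      def lif_rate(mu_in):
          """Simulate an LIF neuron with mean drive mu_in [V] plus white noise,
          and return the stationary firing rate [Hz]."""
          v, spikes = v_rest, 0
          n_steps = int(t_sim / dt)
          noise = rng.normal(0.0, 1.0, n_steps)
          for k in range(n_steps):
              v += (-(v - v_rest) + mu_in) / tau_m * dt + sigma * np.sqrt(dt / tau_m) * noise[k]
              if v >= v_thresh:          # spike and reset
                  v = v_reset
                  spikes += 1
          return spikes / t_sim

      for mu in [0.010, 0.014, 0.018, 0.022]:       # mean input drive [V]
          print(f"input {mu * 1e3:4.0f} mV -> rate {lif_rate(mu):5.1f} Hz")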

  14. Business Model Innovation

    OpenAIRE

    Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher

    2014-01-01

    The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...

  15. IHY Modeling Support at the Community Coordinated Modeling Center

    Science.gov (United States)

    Chulaki, A.; Hesse, Michael; Kuznetsova, Masha; MacNeice, P.; Rastaetter, L.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities during the International Heliospheric Year. In order to tailor CCMC activities to IHY needs, we will also invite community input into our IHY planning activities.

  16. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  17. Petroleum Market Model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation methodologies; Appendix G, Matrix Generator documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  18. Petroleum Market Model of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation methodologies; Appendix G, Matrix Generator documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule

  19. Modeling influenza-like illnesses through composite compartmental models

    Science.gov (United States)

    Levy, Nir; , Michael, Iv; Yom-Tov, Elad

    2018-03-01

    Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the populations, which we demonstrate match those of independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
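
    The building block of the method is the classical SIR model, and the composite signal is, roughly speaking, a sum of such single-strain solutions. The sketch below integrates two hypothetical strains with different R0 values and onset times and adds their prevalence curves into one composite ILI-like signal; the parameters and onsets are illustrative and not fitted to the Twitter-derived data of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def sir(t, y, beta, gamma):
          s, i, r = y
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      t_eval = np.linspace(0.0, 180.0, 361)          # days

      # two hypothetical co-circulating strains with different R0 = beta/gamma and onsets
      strains = [dict(beta=0.30, gamma=0.20, i0=1e-4, t0=0.0),    # R0 = 1.5
                 dict(beta=0.26, gamma=0.20, i0=1e-4, t0=40.0)]   # R0 = 1.3, later onset

      composite = np.zeros_like(t_eval)
      for p in strains:
          sol = solve_ivp(sir, (0.0, 180.0), [1.0 - p["i0"], p["i0"], 0.0],
                          args=(p["beta"], p["gamma"]), t_eval=t_eval, rtol=1e-8)
          # shift the epidemic to its onset time and add its prevalence to the composite signal
          shifted = np.interp(t_eval - p["t0"], t_eval, sol.y[1],
                              left=p["i0"], right=sol.y[1][-1])
          composite += shifted

      print("peak composite prevalence: %.4f at day %.0f"
            % (composite.max(), t_eval[composite.argmax()]))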

  20. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the "AMR Development Plan for U0035 Calibrated Properties Model REV00". These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  1. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potentials. The primary objective of this study is to model the cyclists’ red-light running frequency within the framework of Bayesian statistics. Data was collected at twenty-five approaches at seventeen signalized intersections. The Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that the two models provide a good fit to the observed cyclists’ red-light running frequency. Furthermore, the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists’ red-light running is significantly influenced by bicycle flow, conflict traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation result demonstrated the reliability of the PLN model. The research results can help transportation professionals to predict the expected amount of the cyclists’ red-light running and develop effective guidelines or policies to reduce red-light running frequency of bicycles at signalized intersections.
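
    The Poisson-gamma specification used above is, marginally, a negative binomial model: a unit-mean gamma random effect multiplies the Poisson rate and inflates the variance. The short simulation below checks that the mean and variance behave as the PG formulas predict; the rate and dispersion values are arbitrary illustrative numbers, not estimates from the study data.

      import numpy as np

      rng = np.random.default_rng(3)

      # Poisson-gamma (PG) sketch: a unit-mean gamma random effect on the Poisson rate
      # makes the counts marginally negative binomial with Var[y] = mu + mu^2 / phi.
      n = 50_000
      mu, phi = 5.0, 4.0                                       # illustrative rate and dispersion

      theta = rng.gamma(shape=phi, scale=1.0 / phi, size=n)    # E[theta] = 1, Var[theta] = 1/phi
      counts = rng.poisson(mu * theta)

      print("mean:     empirical %.2f   theoretical %.2f" % (counts.mean(), mu))
      print("variance: empirical %.2f   theoretical %.2f" % (counts.var(), mu + mu**2 / phi))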

  2. An improved interfacial bonding model for material interface modeling

    Science.gov (United States)

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from potential function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. The path dependence of work-of-separation study indicates that the transformation of separation work is smooth in normal and tangential direction and the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in linear elastic region. Ultimately, we investigated the mechanical behavior of extrafibrillar matrix in bone and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343

  3. Modelling freight transport

    NARCIS (Netherlands)

    Tavasszy, L.A.; Jong, G. de

    2014-01-01

    Freight Transport Modelling is a unique new reference book that provides insight into the state-of-the-art of freight modelling. Focusing on models used to support public transport policy analysis, Freight Transport Modelling systematically introduces the latest freight transport modelling

  4. Adhesive contact: from atomistic model to continuum model

    International Nuclear Information System (INIS)

    Fan Kang-Qi; Jia Jian-Yuan; Zhu Ying-Min; Zhang Xiu-Yan

    2011-01-01

    Two types of Lennard-Jones potential are widely used in modeling adhesive contacts. However, the relationships between the parameters of the two types of Lennard-Jones potential are not well defined. This paper employs a self-consistent method to derive the Lennard-Jones surface force law from the interatomic Lennard-Jones potential with emphasis on the relationships between the parameters. The effect of using correct parameters in the adhesion models is demonstrated in single sphere-flat contact via continuum models and an atomistic model. Furthermore, the adhesion hysteresis behaviour is investigated, and the S-shaped force-distance relation is revealed by the atomistic model. It shows that the adhesion hysteresis loop is generated by the jump-to-contact and jump-off-contact, which are illustrated by the S-shaped force-distance curve. (atomic and molecular physics)
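
    The parameter-consistency issue raised above concerns the two common ways of writing the 12-6 Lennard-Jones potential, one in terms of the zero-crossing distance sigma and one in terms of the minimum position r_min = 2^(1/6) sigma. The sketch below, using generic unit parameters rather than the paper's values, verifies numerically that the two forms coincide and that the force vanishes at r_min.

      import numpy as np

      eps = 1.0                              # well depth (illustrative units)
      sigma = 1.0                            # distance at which U = 0
      r_min = 2.0 ** (1.0 / 6.0) * sigma     # distance of the potential minimum

      def u_sigma(r):
          # epsilon/sigma form of the 12-6 Lennard-Jones potential
          return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

      def u_rmin(r):
          # equivalent epsilon/r_min form
          return eps * ((r_min / r) ** 12 - 2.0 * (r_min / r) ** 6)

      def force(r, h=1e-6):
          # pair force F(r) = -dU/dr, evaluated by central differences
          return -(u_sigma(r + h) - u_sigma(r - h)) / (2.0 * h)

      for ri in np.linspace(0.9, 2.5, 9):
          assert abs(u_sigma(ri) - u_rmin(ri)) < 1e-9   # the two forms are the same potential
      print("minimum at r_min: U = %.3f (expected -eps = %.3f), F = %.2e"
            % (u_sigma(r_min), -eps, force(r_min)))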

  5. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations indicates the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, a recalculation of the model validation metric of model output is performed with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in the cases of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration processes are applied at a single site and at multiple sites. • The validation and calibration process shows superiority over existing methods

  6. Finsler Geometry Modeling of an Orientation-Asymmetric Surface Model for Membranes

    Science.gov (United States)

    Proutorov, Evgenii; Koibuchi, Hiroshi

    2017-12-01

    In this paper, a triangulated surface model is studied in the context of Finsler geometry (FG) modeling. This FG model is an extended version of a recently reported model for two-component membranes, and it is asymmetric under surface inversion. We show that the definition of the model is independent of how the Finsler length of a bond is defined. This leads us to understand that the canonical (or Euclidean) surface model is obtained from the FG model such that it is uniquely determined as a trivial model from the viewpoint of well definedness.

  7. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D.W. Wu; A.J. Smith

    2004-11-08

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  8. Biosphere Model Report

    International Nuclear Information System (INIS)

    D.W. Wu; A.J. Smith

    2004-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  9. Model documentation: Natural Gas Transmission and Distribution Model of the National Energy Modeling System; Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-02-24

    The Natural Gas Transmission and Distribution Model (NGTDM) is a component of the National Energy Modeling System (NEMS) used to represent the domestic natural gas transmission and distribution system. NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the Energy Information Administration (EIA) and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. This report documents the archived version of NGTDM that was used to produce the natural gas forecasts used in support of the Annual Energy Outlook 1994, DOE/EIA-0383(94). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic design, provides detail on the methodology employed, and describes the model inputs, outputs, and key assumptions. It is intended to fulfill the legal obligation of the EIA to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). This report represents Volume 1 of a two-volume set. (Volume 2 will report on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.) Subsequent chapters of this report provide: (1) an overview of the NGTDM (Chapter 2); (2) a description of the interface between the National Energy Modeling System (NEMS) and the NGTDM (Chapter 3); (3) an overview of the solution methodology of the NGTDM (Chapter 4); (4) the solution methodology for the Annual Flow Module (Chapter 5); (5) the solution methodology for the Distributor Tariff Module (Chapter 6); (6) the solution methodology for the Capacity Expansion Module (Chapter 7); (7) the solution methodology for the Pipeline Tariff Module (Chapter 8); and (8) a description of model assumptions, inputs, and outputs (Chapter 9).

  10. A multi-model assessment of terrestrial biosphere model data needs

    Science.gov (United States)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial

  11. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  12. A comprehensive model for piezoceramic actuators: modelling, validation and application

    International Nuclear Information System (INIS)

    Quant, Mario; Elizalde, Hugo; Flores, Abiud; Ramírez, Ricardo; Orta, Pedro; Song, Gangbing

    2009-01-01

    This paper presents a comprehensive model for piezoceramic actuators (PAs), which accounts for hysteresis, non-linear electric field and dynamic effects. The hysteresis model is based on the widely used general Maxwell slip model, while an enhanced electro-mechanical non-linear model replaces the linear constitutive equations commonly used. Further on, a linear second order model compensates the frequency response of the actuator. Each individual model is fully characterized from experimental data yielded by a specific PA, then incorporated into a comprehensive 'direct' model able to determine the output strain based on the applied input voltage, fully compensating the aforementioned effects, where the term 'direct' represents an electrical-to-mechanical operating path. The 'direct' model was implemented in a Matlab/Simulink environment and successfully validated via experimental results, exhibiting higher accuracy and simplicity than many published models. This simplicity would allow a straightforward inclusion of other behaviour such as creep, ageing, material non-linearity, etc, if such parameters are important for a particular application. Based on the same formulation, two other models are also presented: the first is an 'alternate' model intended to operate within a force-controlled scheme (instead of a displacement/position control), thus able to capture the complex mechanical interactions occurring between a PA and its host structure. The second development is an 'inverse' model, able to operate within an open-loop control scheme, that is, yielding a 'linearized' PA behaviour. The performance of the developed models is demonstrated via a numerical sample case simulated in Matlab/Simulink, consisting of a PA coupled to a simple mechanical system, aimed at shifting the natural frequency of the latter

  13. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles with sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  14. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    Science.gov (United States)

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  15. BioModels Database: a repository of mathematical models of biological processes.

    Science.gov (United States)

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are stored in SBML format, accepted in SBML and CellML formats, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.

  16. Modeling Distillation Column Using ARX Model Structure and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Pirmoradi

    2012-04-01

    Distillation is a complex and highly nonlinear industrial process. In general it is not always possible to obtain accurate first principles models for high-purity distillation columns. On the other hand, the development of first principles models is usually time consuming and expensive. To overcome these problems, empirical models such as neural networks can be used. One major drawback of empirical models is that the prediction is valid only inside the data domain that is sufficiently covered by measurement data. Modeling distillation columns by means of neural networks is reported in the literature using recursive networks. The recursive networks are suitable for modeling purposes, but such models have the problems of high complexity and high computational cost. The objective of this paper is to propose a simple and reliable model for a distillation column. The proposed model uses feed forward neural networks, which results in a simple model with fewer parameters and faster training time. Simulation results demonstrate that predictions of the proposed model in all regions are close to outputs of the dynamic model and the error is negligible. This implies that the model is reliable in all regions.
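
    For the ARX side of the comparison, parameter estimation reduces to linear least squares on lagged inputs and outputs. The sketch below identifies a second-order ARX model from data generated by a known synthetic system (not a distillation column) simply to show the mechanics of building the regressor matrix; all coefficients are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      # synthetic ARX(2,2) plant: y_k = a1*y_{k-1} + a2*y_{k-2} + b1*u_{k-1} + b2*u_{k-2} + e_k
      a_true = [1.2, -0.5]
      b_true = [0.8, 0.3]

      n = 2000
      u = rng.normal(0.0, 1.0, n)
      y = np.zeros(n)
      for k in range(2, n):
          y[k] = (a_true[0] * y[k - 1] + a_true[1] * y[k - 2]
                  + b_true[0] * u[k - 1] + b_true[1] * u[k - 2]
                  + rng.normal(0.0, 0.05))

      # regressor matrix of lagged outputs and inputs, and least-squares estimate
      Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
      target = y[2:]
      theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
      print("true:     ", a_true + b_true)
      print("estimated:", np.round(theta, 3))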

  17. Constitutive Models

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina

    2011-01-01

    covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses...... the degrees of freedom, dependent and independent variables and solution strategy. Vapour-liquid and solid-liquid equilibrium is covered, and applications to droplet evaporation and kinetic models are given....

  18. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  19. Algebraic formulation of collective models. I. The mass quadrupole collective model

    International Nuclear Information System (INIS)

    Rosensteel, G.; Rowe, D.J.

    1979-01-01

    This paper is the first in a series of three which together present a microscopic formulation of the Bohr--Mottelson (BM) collective model of the nucleus. In this article the mass quadrupole collective (MQC) model is defined and shown to be a generalization of the BM model. The MQC model eliminates the small oscillation assumption of BM and also yields the rotational and CM (3) submodels by holonomic constraints on the MQC configuration space. In addition, the MQC model is demonstrated to be an algebraic model, so that the state space of the MQC model carries an irrep of a Lie algebra of microscopic observables, the MQC algebra. An infinite class of new collective models is then given by the various inequivalent irreps of this algebra. A microscopic embedding of the BM model is achieved by decomposing the representation of the MQC algebra on many-particle state space into its irreducible components. In the second paper this decomposition is studied in detail. The third paper presents the symplectic model, which provides the realization of the collective model in the harmonic oscillator shell model

  20. Modeling of immision from power plants using stream-diffusion model

    International Nuclear Information System (INIS)

    Kanevce, Lj.; Kanevce, G.; Markoski, A.

    1996-01-01

    Analyses of simple empirical and integral immission models, compared with complex three-dimensional differential models, are given. Complex differential models need huge computing power, so they are not practical for engineering calculations. In this paper, immission modelling using a stream-diffusion approach is presented. The dispersion process is divided into two parts. The first, called the stream part, lies near the pollutant source and is represented by a deflected turbulent jet in a wind field. This part ends when the velocity of the stream (jet) becomes equal to the wind speed. The conditions at the end of the first part serve as initial conditions for the second, called the diffusion part, which is modelled with a three-dimensional diffusion equation. The temperature gradient, wind speed profile and diffusion coefficient in this model need not be constants; they can change with height. The presented model is much simpler than the complete meteorological differential models, which calculate whole fields of meteorological parameters. At the same time, it is more detailed than the widely used integral and empirical models and gives more valuable results for the dispersion of pollutants
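
    For orientation only, the sketch below evaluates a generic Gaussian plume formula of the kind often used as a simple screening baseline for stack immission; it is not the stream-diffusion model described above, and the emission rate, stack height and dispersion coefficients are purely illustrative assumptions.

      import numpy as np

      Q = 100.0      # emission rate [g/s] (illustrative)
      u = 5.0        # wind speed at stack height [m/s]
      H = 120.0      # effective stack height [m]

      def sigmas(x):
          # crude power-law dispersion coefficients for a roughly neutral atmosphere (illustrative)
          return 0.08 * x ** 0.90, 0.06 * x ** 0.80       # sigma_y, sigma_z [m]

      def concentration(x, y, z=0.0):
          """Concentration [g/m^3] at downwind distance x, crosswind offset y and height z,
          for a Gaussian plume with total reflection at the ground."""
          sy, sz = sigmas(x)
          cross = np.exp(-y ** 2 / (2.0 * sy ** 2))
          vert = np.exp(-(z - H) ** 2 / (2.0 * sz ** 2)) + np.exp(-(z + H) ** 2 / (2.0 * sz ** 2))
          return Q / (2.0 * np.pi * u * sy * sz) * cross * vert

      x = np.linspace(200.0, 10000.0, 200)
      c = concentration(x, 0.0)
      print("max centre-line ground concentration: %.2e g/m^3 at x = %.0f m"
            % (c.max(), x[c.argmax()]))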

  1. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  2. Mathematical models for sleep-wake dynamics: comparison of the two-process model and a mutual inhibition neuronal model.

    Directory of Open Access Journals (Sweden)

    Anne C Skeldon

    Full Text Available Sleep is essential for the maintenance of the brain and the body, yet many features of sleep are poorly understood and mathematical models are an important tool for probing proposed biological mechanisms. The most well-known mathematical model of sleep regulation, the two-process model, models the sleep-wake cycle by two oscillators: a circadian oscillator and a homeostatic oscillator. An alternative, more recent, model considers the mutual inhibition of sleep promoting neurons and the ascending arousal system regulated by homeostatic and circadian processes. Here we show there are fundamental similarities between these two models. The implications are illustrated with two important sleep-wake phenomena. Firstly, we show that in the two-process model, transitions between different numbers of daily sleep episodes can be classified as grazing bifurcations. This provides the theoretical underpinning for numerical results showing that the sleep patterns of many mammals can be explained by the mutual inhibition model. Secondly, we show that when sleep deprivation disrupts the sleep-wake cycle, ostensibly different measures of sleepiness in the two models are closely related. The demonstration of the mathematical similarities of the two models is valuable because not only does it allow some features of the two-process model to be interpreted physiologically but it also means that knowledge gained from study of the two-process model can be used to inform understanding of the behaviour of the mutual inhibition model. This is important because the mutual inhibition model and its extensions are increasingly being used as a tool to understand a diverse range of sleep-wake phenomena such as the design of optimal shift-patterns, yet the values it uses for parameters associated with the circadian and homeostatic processes are very different from those that have been experimentally measured in the context of the two-process model.
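
    To make the two-process mechanism concrete, the following is a minimal sketch of a Borbely-style two-process simulation: homeostatic pressure S builds during wake and dissipates during sleep, and switching occurs when S crosses circadianly modulated thresholds. The parameter values are illustrative placeholders, not the experimentally fitted values discussed in the paper.

```python
import numpy as np

# Minimal two-process sleep-wake sketch with assumed (not fitted) parameter values.
def simulate_two_process(days=5, dt=0.01):
    chi_w, chi_s = 18.2, 4.2                  # wake build-up / sleep decay time constants (h), assumed
    mu = 1.0                                  # upper asymptote of the homeostat
    h_plus, h_minus, a = 0.6, 0.17, 0.12      # mean thresholds and circadian amplitude, assumed
    t_grid = np.arange(0.0, 24.0 * days, dt)
    S, awake = 0.5, True
    states, s_vals = [], []
    for t in t_grid:
        C = np.sin(2 * np.pi * (t - 8.0) / 24.0)  # simple sinusoidal circadian process
        upper = h_plus + a * C                    # sleep-onset threshold
        lower = h_minus + a * C                   # wake-onset threshold
        if awake:
            S += dt * (mu - S) / chi_w            # pressure builds during wake
            if S >= upper:
                awake = False
        else:
            S -= dt * S / chi_s                   # pressure dissipates during sleep
            if S <= lower:
                awake = True
        states.append(awake)
        s_vals.append(S)
    return t_grid, np.array(s_vals), np.array(states)

t, S, awake = simulate_two_process()
print(f"fraction of time awake: {awake.mean():.2f}")
```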

  3. Crop rotation modelling-A European model intercomparison

    Czech Academy of Sciences Publication Activity Database

    Kollas, C.; Kersebaum, K. C.; Nendel, C.; Manevski, K.; Müller, C.; Palosuo, T.; Armas-Herrera, C.; Beaudoin, N.; Bindi, M.; Charefeddine, M.; Conradt, T.; Constantin, J.; Eitzinger, J.; Ewert, F.; Ferrise, R.; Gaiser, T.; de Cortazar-Atauri, I. G.; Giglio, L.; Hlavinka, Petr; Hoffman, H.; Hofmann, M.; Launay, M.; Manderscheid, R.; Mary, B.; Mirschel, W.; Moriondo, M.; Olesen, J. E.; Öztürk, I.; Pacholski, A.; Ripoche-Wachter, D.; Roggero, P. P.; Roncossek, S.; Rötter, R. P.; Ruget, F.; Sharif, B.; Trnka, Miroslav; Ventrella, D.; Waha, K.; Wegehenkel, M.; Weigel, H-J.; Wu, L.

    2015-01-01

    Roč. 70, oct (2015), s. 98-111 ISSN 1161-0301 Institutional support: RVO:67179843 Keywords : model ensemble * crop simulation models * catch crop * intermediate crop * treatment * Multi-year Subject RIV: GC - Agronomy Impact factor: 3.186, year: 2015

  4. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ghezzehej, T.

    2004-01-01

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency

  5. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  6. Modeling patterns in data using linear and related models

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
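
    A minimal sketch of the one-covariate case discussed above, fitting a linear time trend by least squares and reporting the usual standard errors of the estimators; the data are synthetic and the variable names are assumptions made only for illustration.

```python
import numpy as np

# One-covariate linear model for a time trend: y_t = b0 + b1 * t + error (synthetic data).
rng = np.random.default_rng(0)
t = np.arange(20, dtype=float)                   # e.g. years of operation
y = 5.0 + 0.3 * t + rng.normal(0, 1.0, t.size)   # synthetic event rates

X = np.column_stack([np.ones_like(t), t])        # design matrix with intercept
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - X.shape[1])       # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)                # covariance of the estimators
se = np.sqrt(np.diag(cov))
print(f"intercept = {beta[0]:.2f} (+/- {se[0]:.2f}), trend = {beta[1]:.3f} (+/- {se[1]:.3f})")
```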

  7. Transport properties site descriptive model. Guidelines for evaluation and modelling

    International Nuclear Information System (INIS)

    Berglund, Sten; Selroos, Jan-Olof

    2004-04-01

    This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. The site descriptive modelling of

  8. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    Science.gov (United States)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally

  9. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  10. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  11. PEMODELAN DAERAH TANGKAPAN AIR WADUK KELILING DENGAN MODEL SWAT (Keliling Reservoir Catchment Area Modeling Using SWAT Model)

    Directory of Open Access Journals (Sweden)

    Teuku Ferijal

    2015-05-01

    Full Text Available This study aimed to model the watershed area of Keliling Reservoir using the SWAT model. The reservoir is located in Aceh Besar District, Province of Aceh. The model was set up using a 90 m x 90 m digital elevation model, land use data extracted from remote sensing data, and soil characteristics obtained from laboratory analysis of soil samples. The model was calibrated using observed daily reservoir volume, and model performance was analyzed using the RMSE-observations standard deviation ratio (RSR), Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS). The model delineated the study area into 3,448 ha comprising 13 subwatersheds and 76 land units (HRUs). The watershed is mostly covered by forest (53%) and grassland (31%). The analysis revealed the 10 most sensitive parameters, i.e. GW_DELAY, CN2, REVAPMN, ALPHA_BF, SOL_AWC, GW_REVAP, GWQMN, CH_K2 and ESCO. Model performance was categorized as very good for monthly reservoir volume, with NSE 0.95, RSR 0.23, and PBIAS 2.97. The model performance decreased when it was used to analyze daily reservoir inflow, with NSE 0.55, RSR 0.67, and PBIAS 3.46. Keywords: Keliling Reservoir, SWAT, Watershed
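
    The three performance scores quoted above have standard definitions, sketched below; note that the sign convention for PBIAS varies between authors, and the observed/simulated arrays here are placeholders rather than the study's data.

```python
import numpy as np

# Standard definitions of NSE, RSR and PBIAS (one common sign convention for PBIAS).
def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / obs.std(ddof=0)

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [1.2, 1.5, 1.9, 2.4, 2.0, 1.7]   # placeholder observed monthly reservoir volumes
sim = [1.1, 1.6, 1.8, 2.5, 2.1, 1.6]   # placeholder simulated values
print(nse(obs, sim), rsr(obs, sim), pbias(obs, sim))
```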

  12. CP^(N-1) model: a toy model for QCD

    International Nuclear Information System (INIS)

    Cant, R.J.; Davis, A.C.

    1979-01-01

    The authors examine the two-dimensional CP^(N-1) models and discuss their relevance as toy models for QCD_4. Specifically, they study the role of instantons, theta vacua, and confinement in the 1/N expansion. The results, and comparisons with other two-dimensional models, suggest that most of the interesting features of these models are peculiarities of two-dimensional space-time and cannot be expected to reappear in QCD_4. (Auth.)

  13. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
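
    The sketch below illustrates the generic idea of bootstrap model averaging over an AIC-selected candidate set. It is not the BOOT or double BOOT procedure of the paper, it ignores the time-series structure of real mortality data, and it uses synthetic variables throughout.

```python
import numpy as np

# Generic bootstrap model averaging: refit candidate regressions on bootstrap resamples,
# pick the AIC-best candidate in each resample, and average the pollution coefficient.
rng = np.random.default_rng(1)
n = 200
pm = rng.normal(30, 10, n)                              # synthetic PM concentrations
temp = rng.normal(15, 5, n)                             # synthetic confounder
y = 50 + 0.08 * pm + 0.3 * temp + rng.normal(0, 3, n)   # synthetic mortality outcome

def fit_aic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    aic = len(y) * np.log(sigma2) + 2 * X.shape[1]
    return beta, aic

candidates = [                                          # candidate confounder sets
    lambda pm, temp: np.column_stack([np.ones_like(pm), pm]),
    lambda pm, temp: np.column_stack([np.ones_like(pm), pm, temp]),
    lambda pm, temp: np.column_stack([np.ones_like(pm), pm, temp, temp ** 2]),
]

pm_effects = []
for _ in range(500):
    idx = rng.integers(0, n, n)                         # bootstrap resample of observations
    fits = [fit_aic(c(pm[idx], temp[idx]), y[idx]) for c in candidates]
    beta, _ = min(fits, key=lambda f: f[1])             # AIC-best candidate in this resample
    pm_effects.append(beta[1])                          # PM coefficient is column 1 in all candidates

print(np.mean(pm_effects), np.std(pm_effects))
```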

  14. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  15. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    Science.gov (United States)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  16. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM, including self-organization, self-adaptive and self-learning, the dynamic information contained in the historical data can be identified and extracted by computer automatically. Thereby, a new approach is proposed to estimate model errors based on EM in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of the statistics and dynamics to certain extent.
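
    The twin-experiment setup can be sketched as follows: the classic Lorenz (1963) system serves as the imperfect prediction model, while the same system with an added periodic term stands in for "reality". The particular forcing term and the Runge-Kutta integrator below are illustrative assumptions, not the authors' evolutionary-modelling scheme.

```python
import numpy as np

# Lorenz-63 twin experiment: forcing = 0 gives the prediction model; a nonzero periodic
# forcing (illustrative, not the paper's evolutionary function) stands in for "reality".
def lorenz_rhs(state, t, forcing=0.0, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y + forcing,
                     x * y - beta * z])

def integrate(rhs, state, t0, t1, dt=0.01):
    t, s = t0, np.array(state, float)
    while t < t1:
        k1 = rhs(s, t)                                   # classical fourth-order Runge-Kutta step
        k2 = rhs(s + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = rhs(s + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = rhs(s + dt * k3, t + dt)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return s

x0 = [1.0, 1.0, 1.0]
truth = integrate(lambda s, t: lorenz_rhs(s, t, forcing=2.0 * np.sin(0.5 * t)), x0, 0.0, 1.0)
model = integrate(lambda s, t: lorenz_rhs(s, t, forcing=0.0), x0, 0.0, 1.0)
print("model error after 1 time unit:", truth - model)
```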

  17. SEP modeling based on global heliospheric models at the CCMC

    Science.gov (United States)

    Mays, M. L.; Luhmann, J. G.; Odstrcil, D.; Bain, H. M.; Schwadron, N.; Gorby, M.; Li, Y.; Lee, K.; Zeitlin, C.; Jian, L. K.; Lee, C. O.; Mewaldt, R. A.; Galvin, A. B.

    2017-12-01

    Heliospheric models provide contextual information of conditions in the heliosphere, including the background solar wind conditions and shock structures, and are used as input to SEP models, providing an essential tool for understanding SEP properties. The global 3D MHD WSA-ENLIL+Cone model provides a time-dependent background heliospheric description, into which a spherical shaped hydrodynamic CME can be inserted. ENLIL simulates solar wind parameters and additionally one can extract the magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. An accurate representation of the background solar wind is necessary for simulating transients. ENLIL simulations also drive SEP models such as the Solar Energetic Particle Model (SEPMOD) (Luhmann et al. 2007, 2010) and the Energetic Particle Radiation Environment Module (EPREM) (Schwadron et al. 2010). The Community Coordinated Modeling Center (CCMC) is in the process of making these SEP models available to the community and offering a system to run SEP models driven by a variety of heliospheric models available at CCMC. SEPMOD injects protons onto a sequence of observer field lines at intensities dependent on the connected shock source strength which are then integrated at the observer to approximate the proton flux. EPREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. The coupled ENLIL and SEP models allow us to derive the longitudinal distribution of SEP profiles of different types of events throughout the heliosphere. In this presentation we demonstrate several case studies of SEP event modeling at different observers based on WSA-ENLIL+Cone simulations.

  18. Map algebra and model algebra for integrated model building

    NARCIS (Netherlands)

    Schmitz, O.; Karssenberg, D.J.; Jong, K. de; Kok, J.-L. de; Jong, S.M. de

    2013-01-01

    Computer models are important tools for the assessment of environmental systems. A seamless workflow of construction and coupling of model components is essential for environmental scientists. However, currently available software packages are often tailored either to the construction of model

  19. CCF model comparison

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    2004-04-01

    The report describes a simple comparison of two CCF-models, the ECLM, and the Beta-model. The objective of the comparison is to identify differences in the results of the models by applying the models in some simple test data cases. The comparison focuses mainly on theoretical aspects of the above mentioned CCF-models. The properties of the model parameter estimates in the data cases is also discussed. The practical aspects in using and estimating CCFmodels in real PSA context (e.g. the data interpretation, properties of computer tools, the model documentation) are not discussed in the report. Similarly, the qualitative CCF-analyses needed in using the models are not discussed in the report. (au)

  20. Wake modelling combining mesoscale and microscale models

    DEFF Research Database (Denmark)

    Badger, Jake; Volker, Patrick; Prospathospoulos, J.

    2013-01-01

    In this paper the basis for introducing thrust information from microscale wake models into mesocale model wake parameterizations will be described. A classification system for the different types of mesoscale wake parameterizations is suggested and outlined. Four different mesoscale wake paramet...

  1. Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)

    Science.gov (United States)

    Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David

    2018-01-01

    Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life. The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet whether from emission, scattering, or in transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation. The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or "checking in" new model simulations that are in accordance with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of

  2. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    Science.gov (United States)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales that link regional and global data sets and ecosystem nutrient transformations Isoprene emission estimates are determined from estimates of ecosystem specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
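
    Emission algorithms of the kind referred to above typically combine an ecosystem-specific emission factor with light and temperature correction terms. The sketch below uses Guenther-style correction factors with commonly cited constants; treat the constants and the function signature as illustrative assumptions rather than this model's exact parameterization.

```python
import numpy as np

# Guenther-style isoprene emission sketch: emission factor times biomass times light and
# temperature corrections. Constants are commonly cited literature values, used for illustration.
R = 8.314                          # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066
C_T1, C_T2 = 95000.0, 230000.0     # J mol-1
T_S, T_M = 303.0, 314.0            # K

def isoprene_flux(epsilon, biomass, ppfd, temp_k):
    """epsilon: emission factor (ug C g-1 h-1); biomass: foliar density (g m-2);
    ppfd: photosynthetic photon flux density (umol m-2 s-1); temp_k: leaf temperature (K)."""
    c_l = ALPHA * C_L1 * ppfd / np.sqrt(1.0 + ALPHA ** 2 * ppfd ** 2)   # light correction
    c_t = np.exp(C_T1 * (temp_k - T_S) / (R * T_S * temp_k)) / \
          (1.0 + np.exp(C_T2 * (temp_k - T_M) / (R * T_S * temp_k)))    # temperature correction
    return epsilon * biomass * c_l * c_t                                # ug C m-2 h-1

print(isoprene_flux(epsilon=24.0, biomass=300.0, ppfd=1000.0, temp_k=303.0))
```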

  3. STAMINA - Model description. Standard Model Instrumentation for Noise Assessments

    NARCIS (Netherlands)

    Schreurs EM; Jabben J; Verheijen ENG; CMM; mev

    2010-01-01

    This report describes the STAMINA model, which stands for Standard Model Instrumentation for Noise Assessments and was developed by RIVM. The institute uses this standard model to map environmental noise in the Netherlands. The model is based on the Standaard Karteringsmethode (Standard Mapping Method)

  4. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers

    DEFF Research Database (Denmark)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel

    2016-01-01

    ...the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. ... The ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  5. The Trimeric Model: A New Model of Periodontal Treatment Planning

    Science.gov (United States)

    Tarakji, Bassel

    2014-01-01

    Treatment of periodontal disease is a complex and multidisciplinary procedure, requiring periodontal, surgical, restorative, and orthodontic treatment modalities. Several authors have attempted to formulate models for periodontal treatment that order the treatment steps in a logical and easy-to-remember manner. In this article, we discuss two models of periodontal treatment planning from two of the most well-known textbooks in the specialty of periodontics internationally, and then modify them to arrive at a new model of periodontal treatment planning, the Trimeric Model. Adding restorative and orthodontic interrelationships with periodontal treatment allows us to expand this model into the Extended Trimeric Model of periodontal treatment planning. These models will provide a logical framework and a clear order for the treatment of periodontal disease for general practitioners and periodontists alike. PMID:25177662

  6. Model building

    International Nuclear Information System (INIS)

    Frampton, Paul H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA

  7. Composite hadron models

    International Nuclear Information System (INIS)

    Ogava, S.; Savada, S.; Nakagava, M.

    1983-01-01

    Composite models of hadrons are considered. The main attention is paid to the Sakata model. In the framework of this model it is assumed that the proton, the neutron and the Λ particle are the fundamental particles. Theoretical studies of the unknown fundamental constituents of matter have led to the creation of the quark model. In the framework of the quark model, using the theory of SU(6) symmetry, the classification of mesons and baryons is considered. Using the quark model, relations between hadron masses, their spins and their electromagnetic properties are explained. The problem of the three-colour model with many flavours is briefly presented.

  8. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.
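
    The biochemical leaf model most often placed at the base of such cross-scale frameworks is the Farquhar-von Caemmerer-Berry (FvCB) model, sketched below with typical 25 degree C parameter values; the numbers are textbook illustrations, not values from this paper.

```python
# FvCB leaf photosynthesis sketch: net assimilation is the minimum of the Rubisco-limited
# and electron-transport-limited rates minus day respiration. Parameter values are typical
# 25 C illustrations only.
def fvcb_assimilation(ci, vcmax=80.0, j=160.0, gamma_star=42.75,
                      kc=404.9, ko=278.4, o=210.0, rd=1.0):
    """ci: intercellular CO2 (ubar); kc, gamma_star in ubar; ko, o in mbar.
    Returns net assimilation (umol CO2 m-2 s-1)."""
    ac = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))   # Rubisco-limited rate
    aj = j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star)    # electron-transport-limited rate
    return min(ac, aj) - rd

for ci in (100.0, 250.0, 400.0):
    print(ci, round(fvcb_assimilation(ci), 2))
```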

  9. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232

  10. Model Sistem Informasi Manajemen Sekolah Berbasiskan Notasi Unified Modeling Language (School Management Information System Model Based on Unified Modeling Language Notation)

    Directory of Open Access Journals (Sweden)

    Yohannes Kurniawan

    2013-12-01

    Full Text Available Basically, the use of integrated information systems applies not only to companies, but also to the education industry, particularly schools. To support business processes at the school, this research describes a conceptual model of information systems using the Unified Modeling Language (UML) notation with the "4+1 View" architectural model. This model is expected to assist in the analysis and design of the whole business process at the school. A conceptual model of the information system can help application developers to easily and clearly understand the school system. By adopting this information system model, schools are able to gain an effective understanding of management information systems.

  11. An Agent Model Integrating an Adaptive Model for Environmental Dynamics

    NARCIS (Netherlands)

    Treur, J.; Umair, M.

    2011-01-01

    The environments in which agents are used often may be described by dynamical models, e.g., in the form of a set of differential equations. In this paper, an agent model is proposed that can perform model-based reasoning about the environment, based on a numerical (dynamical system) model of the

  12. Model unspecific search in CMS. Model unspecific limits

    Energy Technology Data Exchange (ETDEWEB)

    Knutzen, Simon; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    The standard model of particle physics is increasingly challenged by recent discoveries and also by long known phenomena, representing a strong motivation to develop extensions of the standard model. The number of theories describing possible extensions is large and steadily growing. In this presentation a new approach is introduced, which verifies whether a given theory beyond the standard model is consistent with data collected by the CMS detector, without the need to perform a dedicated search. To achieve this, model unspecific limits on the number of additional events above the standard model expectation are calculated in every event class produced by the MUSiC algorithm. Furthermore, a tool is provided to translate these results into limits on the signal cross section of any theory. In addition to the general procedure, first results and examples are shown using proton-proton collision data taken at a centre-of-mass energy of 8 TeV.

  13. Model-Independent Diffs

    DEFF Research Database (Denmark)

    Könemann, Patrick

    just contain a list of strings, one for each line, whereas the structure of models is defined by their meta models. There are tools available which are able to compute the diff between two models, e.g. RSA or EMF Compare. However, their diff is not model-independent, i.e. it refers to the models...

  14. The CAFE model: A net production model for global ocean phytoplankton

    Science.gov (United States)

    Silsbe, Greg M.; Behrenfeld, Michael J.; Halsey, Kimberly H.; Milligan, Allen J.; Westberry, Toby K.

    2016-12-01

    The Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) net primary production model is an adaptable framework for advancing global ocean productivity assessments by exploiting state-of-the-art satellite ocean color analyses and addressing key physiological and ecological attributes of phytoplankton. Here we present the first implementation of the CAFE model that incorporates inherent optical properties derived from ocean color measurements into a mechanistic and accurate model of phytoplankton growth rates (μ) and net phytoplankton production (NPP). The CAFE model calculates NPP as the product of energy absorption (QPAR) and the efficiency (ϕμ) by which absorbed energy is converted into carbon biomass (CPhyto), while μ is calculated as NPP normalized to CPhyto. The CAFE model performance is evaluated alongside 21 other NPP models against a spatially robust and globally representative set of direct NPP measurements. This analysis demonstrates that the CAFE model explains the greatest amount of variance and has the lowest model bias relative to other NPP models analyzed with this data set. Global oceanic NPP from the CAFE model (52 Pg C yr-1) and mean division rates (0.34 day-1) are derived from climatological satellite data (2002-2014). This manuscript discusses and validates individual CAFE model parameters (e.g., QPAR and ϕμ), provides detailed sensitivity analyses, and compares the CAFE model results and parameterization to other widely cited models.
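
    The two accounting relationships stated above can be written directly in code; the numbers below are placeholders chosen only to show how NPP and the division rate are derived from absorbed energy, conversion efficiency, and phytoplankton carbon.

```python
# Sketch of the CAFE accounting identities: NPP = absorbed energy x conversion efficiency,
# and division rate mu = NPP normalized to phytoplankton carbon. Values are placeholders.
def cafe_npp(q_par, phi_mu):
    """q_par: energy absorbed by phytoplankton; phi_mu: energy-to-carbon conversion efficiency."""
    return q_par * phi_mu

def division_rate(npp, c_phyto):
    """Growth rate mu as NPP normalized to phytoplankton carbon biomass."""
    return npp / c_phyto

npp = cafe_npp(q_par=12.0, phi_mu=0.05)   # placeholder values in mutually consistent units
print(npp, division_rate(npp, c_phyto=1.8))
```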

  15. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    Science.gov (United States)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  16. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and finally errors of commission in nuclear power plants. Quantitative or prescriptive models to predict an operator's situation assessment in a situation, the results of situation assessment, provide many benefits such as HSI design solutions, human performance data, and human reliability. Unfortunately, only a few computational situation assessment models for NPP operators have been proposed, and those insufficiently embed human cognitive characteristics. Thus we propose a new computational situation assessment model of nuclear power plant operators. The proposed model, incorporating significant cognitive factors, uses a Bayesian belief network (BBN) as its model architecture. It is believed that communication between nuclear power plant operators affects operators' situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As a result, the proposed model succeeded in representing the operators' behavior, and this paper shows the details

  17. Model building

    International Nuclear Information System (INIS)

    Frampton, P.H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA. copyright 1998 American Institute of Physics

  18. Forest-fire models

    Science.gov (United States)

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  19. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular the choice of functional form and cost-damping. Specifically, we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline function is compared with more traditional...

  20. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  1. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  2. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  3. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Directory of Open Access Journals (Sweden)

    C Brandon Ogbunugafor

    Full Text Available Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.

  4. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Science.gov (United States)

    Ogbunugafor, C Brandon; Robinson, Sean P

    2016-01-01

    Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.
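
    The translation rule described above, from a flow diagram to an ODE right-hand side, can be sketched as follows for a simple SIR-type compartmental example; the rate constants and the encoding of flows as (source, target, rate) tuples are illustrative assumptions rather than the paper's formal schema or database representation.

```python
import numpy as np

# Flow-diagram-to-ODE sketch: each directed edge draws a parcel from a source species and
# deposits it in a target species at a given rate; the RHS is assembled mechanically.
species = ["S", "I", "R"]
idx = {s: i for i, s in enumerate(species)}

# Each flow: (source, target, rate(y)); None would mean "outside the system".
flows = [
    ("S", "I", lambda y: 0.3 * y[idx["S"]] * y[idx["I"]] / y.sum()),  # infection
    ("I", "R", lambda y: 0.1 * y[idx["I"]]),                          # recovery
]

def rhs(y):
    dydt = np.zeros_like(y)
    for src, dst, rate in flows:
        r = rate(y)
        if src is not None:
            dydt[idx[src]] -= r        # parcel leaves the source species
        if dst is not None:
            dydt[idx[dst]] += r        # and is deposited in the target species
    return dydt

y = np.array([990.0, 10.0, 0.0])
for _ in range(100):                   # forward-Euler integration, step = 0.1 day
    y = y + 0.1 * rhs(y)
print(dict(zip(species, np.round(y, 1))))
```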

  5. Modelling Constructs

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2009-01-01

    There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these notations have been extended in order to increase expressiveness and to be more competitive. This resulted in an increasing number of notations and formalisms for modelling business processes and in an increase of the different modelling constructs provided by modelling notations, which makes it difficult to compare modelling notations and to make transformations between them. One of the reasons is that, in each notation, the new concepts are introduced in a different way by extending the already existing constructs. In this chapter, we go the opposite direction: We show that it is possible to add most...

  6. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the two main quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial quality parameters. HBMs are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. The HBM is applied to provide an overall assessment of the degradation of railway track geometry for the main Portuguese railway line Lisbon-Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures leads to a better model fit
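
    A stripped-down illustration of the hierarchical Bayesian idea, fitted by Gibbs sampling: section-specific degradation rates are shrunk toward a network-level mean. This sketch omits the spatially correlated (CAR) components used in the paper, and all data, priors and parameter names are synthetic assumptions.

```python
import numpy as np

# Two-level hierarchical normal model fitted by Gibbs sampling (conjugate updates).
# Synthetic data only; not the paper's track-geometry model with CAR spatial structure.
rng = np.random.default_rng(0)
m, n = 20, 15                                          # track sections, observations per section
true_rates = rng.normal(0.5, 0.15, m)                  # synthetic per-section deterioration rates
y = true_rates[:, None] + rng.normal(0, 0.1, (m, n))   # noisy observations

mu, tau2, sigma2 = 0.0, 1.0, 1.0
a0, b0 = 0.01, 0.01                                    # weak inverse-gamma hyperpriors
draws = []
for it in range(2000):
    # section-level means given everything else (conjugate normal update)
    prec = n / sigma2 + 1.0 / tau2
    mean = (n * y.mean(axis=1) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # network-level mean (flat prior)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / m))
    # variances via conjugate inverse-gamma updates (sample Gamma, then invert)
    sigma2 = 1.0 / rng.gamma(a0 + y.size / 2.0,
                             1.0 / (b0 + 0.5 * ((y - theta[:, None]) ** 2).sum()))
    tau2 = 1.0 / rng.gamma(a0 + m / 2.0,
                           1.0 / (b0 + 0.5 * ((theta - mu) ** 2).sum()))
    if it >= 500:                                      # discard burn-in
        draws.append(mu)
print("posterior mean of network-level rate:", np.mean(draws))
```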

  7. Foraminifera Models to Interrogate Ostensible Proxy-Model Discrepancies During Late Pliocene

    Science.gov (United States)

    Jacobs, P.; Dowsett, H. J.; de Mutsert, K.

    2017-12-01

    Planktic foraminifera faunal assemblages have been used in the reconstruction of past oceanic states (e.g. the Last Glacial Maximum, the mid-Piacenzian Warm Period). However these reconstruction efforts have typically relied on inverse modeling using transfer functions or the modern analog technique, which by design seek to translate foraminifera into one or two target oceanic variables, primarily sea surface temperature (SST). These reconstructed SST data have then been used to test the performance of climate models, and discrepancies have been attributed to shortcomings in climate model processes and/or boundary conditions. More recently forward proxy models or proxy system models have been used to leverage the multivariate nature of proxy relationships to their environment, and to "bring models into proxy space". Here we construct ecological models of key planktic foraminifera taxa, calibrated and validated with World Ocean Atlas (WO13) oceanographic data. Multiple modeling methods (e.g. multilayer perceptron neural networks, Mahalanobis distance, logistic regression, and maximum entropy) are investigated to ensure robust results. The resulting models are then driven by a Late Pliocene climate model simulation with biogeochemical as well as temperature variables. Similarities and differences with previous model-proxy comparisons (e.g. PlioMIP) are discussed.

  8. On the equivalence between sine-Gordon model and Thirring model in the chirally broken phase of the Thirring model

    International Nuclear Information System (INIS)

    Faber, M.; Ivanov, A.N.

    2001-01-01

    We investigate the equivalence between Thirring model and sine-Gordon model in the chirally broken phase of the Thirring model. This is unlike all other available approaches where the fermion fields of the Thirring model were quantized in the chiral symmetric phase. In the path integral approach we show that the bosonized version of the massless Thirring model is described by a quantum field theory of a massless scalar field and exactly solvable, and the massive Thirring model bosonizes to the sine-Gordon model with a new relation between the coupling constants. We show that the non-perturbative vacuum of the chirally broken phase in the massless Thirring model can be described in complete analogy with the BCS ground state of superconductivity. The Mermin-Wagner theorem and Coleman's statement concerning the absence of Goldstone bosons in the 1+1-dimensional quantum field theories are discussed. We investigate the current algebra in the massless Thirring model and give a new value of the Schwinger term. We show that the topological current in the sine-Gordon model coincides with the Noether current responsible for the conservation of the fermion number in the Thirring model. This allows one to identify the topological charge in the sine-Gordon model with the fermion number. (orig.)

  9. Mathematical models for atmospheric pollutants. Appendix D. Available air quality models. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; McNaughton, D.J.; Huang, C.

    1979-08-01

    Models that are available for the analysis of airborne pollutants are summarized. In addition, recommendations are given concerning the use of particular models to aid in particular air quality decision making processes. The air quality models are characterized in terms of time and space scales, steady state or time dependent processes, reference frames, reaction mechanisms, treatment of turbulence and topography, and model uncertainty. Using these characteristics, the models are classified in the following manner: simple deterministic models, such as air pollution indices, simple area source models and rollback models; statistical models, such as averaging time models, time series analysis and multivariate analysis; local plume and puff models; box and multibox models; finite difference or grid models; particle models; physical models, such as wind tunnels and liquid flumes; regional models; and global models

  10. Fatigue modelling according to the JCSS Probabilistic model code

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2007-01-01

    The Joint Committee on Structural Safety is working on a Model Code for full probabilistic design. The code consists of three major parts: Basis of design, Load Models and Models for Material and Structural Properties. The code is intended as the operational counterpart of codes like ISO,

  11. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

    Five evapotranspiration (Et) models – the Penman, Blaney–Criddle, Thornthwaite, Blaney–Morin–Nigeria, and Jensen and Haise models – were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was based on the ...

  12. On coupling global biome models with climate models

    OpenAIRE

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992; J. Biogeogr. 19: 117-134), which predicts global vegetation patterns in equilibrium with climate, was coupled with the ECHAM climate model of the Max-Planck-Institut für Meteorologie, Hamburg, Germany. It was found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. Strongest changes are seen only betw...

  13. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naive Bayes (NB) model with a mixture of factor analyzers, thereby relaxing the assumptions ... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers ...

  14. Template for Conceptual Model Construction: Model Review and Corps Applications

    National Research Council Canada - National Science Library

    Henderson, Jim E; O'Neil, L. J

    2007-01-01

    .... The template will expedite conceptual model construction by providing users with model parameters and potential model components, building on a study team's knowledge and experience, and promoting...

  15. Nonlinear Modeling by Assembling Piecewise Linear Models

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
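
    The assembly idea can be sketched in a few lines: local first-order (Taylor) models built at a handful of sampling states are blended with normalized Gaussian radial basis function weights. The one-dimensional test function and all constants below are invented for illustration; they are not the aerodynamic system studied in the paper.

```python
import numpy as np

def f(x):
    """Stand-in nonlinear 'full-order' response, used only to build local models."""
    return np.sin(x) + 0.1 * x ** 2

def df(x):
    return np.cos(x) + 0.2 * x

centers = np.linspace(-3.0, 3.0, 7)   # sampling states with local linear models
values, slopes = f(centers), df(centers)
sigma = 0.8                            # RBF width (tuning parameter)

def surrogate(x):
    x = np.atleast_1d(x)[:, None]                        # shape (n, 1)
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)      # Gaussian RBF weights
    w /= w.sum(axis=1, keepdims=True)                    # normalize over local models
    local = values + slopes * (x - centers)              # each local Taylor prediction
    return (w * local).sum(axis=1)                       # weighted assembly

x_test = np.linspace(-3.0, 3.0, 5)
print("assembled:", np.round(surrogate(x_test), 3))
print("true     :", np.round(f(x_test), 3))
```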

  16. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  17. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla. CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  18. Heterogeneous traffic flow modelling using second-order macroscopic continuum model

    Science.gov (United States)

    Mohan, Ranju; Ramadurai, Gitakrishnan

    2017-01-01

    Modelling heterogeneous traffic flow lacking in lane discipline is one of the emerging research areas of the past few years. The two main challenges in modelling are: capturing the effect of varying size of vehicles, and the lack of lane discipline, both of which together lead to the 'gap filling' behaviour of vehicles. The same section length of the road can be occupied by different types of vehicles at the same time, and the conventional measure of traffic concentration, density (vehicles per lane per unit length), is not a good measure for heterogeneous traffic modelling. The first aim of this paper is to develop a parsimonious model of heterogeneous traffic that can capture the unique phenomenon of gap filling. The second aim is to emphasize the suitability of higher-order models for modelling heterogeneous traffic. Third, the paper aims to suggest area occupancy as the concentration measure of heterogeneous traffic lacking in lane discipline. The two main challenges mentioned above are addressed by extending an existing second-order continuum model of traffic flow, using area occupancy for traffic concentration instead of density. The extended model is calibrated and validated with field data from an arterial road in Chennai city, and the results are compared with those from a few existing generalized multi-class models.

  19. Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.

    Science.gov (United States)

    Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer

    2017-06-01

    Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. PD/PID controller tuning based on model approximations: Model reduction of some unstable and higher order nonlinear models

    Directory of Open Access Journals (Sweden)

    Christer Dalen

    2017-10-01

    Full Text Available A model reduction technique based on optimization theory is presented, where a possible higher order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possible higher order system. Numerous examples are used to illustrate the theory, i.e. both linear and nonlinear models. The Pareto Optimal controller is used as a reference controller.

  1. Coupling population dynamics with earth system models: the POPEM model.

    Science.gov (United States)

    Navarro, Andrés; Moreno, Raúl; Jiménez-Alcázar, Alfonso; Tapiador, Francisco J

    2017-09-16

    Precise modeling of CO2 emissions is important for environmental research. This paper presents a new model of human population dynamics that can be embedded into ESMs (Earth System Models) to improve climate modeling. Through a system dynamics approach, we develop a cohort-component model that successfully simulates historical population dynamics with fine spatial resolution (about 1°×1°). The population projections are used to improve the estimates of CO2 emissions, thus transcending the bulk approach of existing models and allowing more realistic non-linear effects to feature in the simulations. The module, dubbed POPEM (from Population Parameterization for Earth Models), is compared with current emission inventories and validated against UN aggregated data. Finally, it is shown that the module can be used to advance toward fully coupling the social and natural components of the Earth system, an emerging research path for environmental science and pollution research.
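
    The cohort-component bookkeeping behind a module like POPEM can be conveyed with a toy projection step: each age cohort is survived forward one period and a newborn cohort is added from age-specific fertility. The rates below are invented round numbers, and the real module works per grid cell with migration and far more demographic detail.

```python
import numpy as np

# Hypothetical 5-year age groups: 0-4, 5-9, ..., 80+
ages = 17
pop = np.full(ages, 1000.0)                    # initial population per cohort (thousands)
survival = np.linspace(0.995, 0.75, ages)      # probability of surviving to the next group
fertility = np.zeros(ages)
fertility[3:9] = [0.05, 0.20, 0.25, 0.20, 0.10, 0.03]   # births per person, ages 15-44

def project_step(pop):
    new = np.zeros_like(pop)
    new[1:] = pop[:-1] * survival[:-1]         # age cohorts forward with survival
    new[-1] += pop[-1] * survival[-1]          # open-ended oldest group accumulates
    new[0] = (pop * fertility).sum()           # newborn cohort from age-specific fertility
    return new

for _ in range(10):                            # ten 5-year steps = 50 years
    pop = project_step(pop)

print("total population after 50 years (thousands):", round(pop.sum(), 1))
```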

  2. Applications of the k – ω Model in Stellar Evolutionary Models

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yan, E-mail: ly@ynao.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216 (China)

    2017-05-20

    The k – ω model for turbulence was first proposed by Kolmogorov. A new k – ω model for stellar convection was developed by Li, which could reasonably describe turbulent convection not only in the convectively unstable zone, but also in the overshooting regions. We revised the k – ω model by improving several model assumptions (including the macro-length of turbulence, convective heat flux, and turbulent mixing diffusivity, etc.), making it applicable not only for convective envelopes, but also for convective cores. Eight parameters are introduced in the revised k – ω model. It should be noted that the Reynolds stress (turbulent pressure) is neglected in the equation of hydrostatic support. We applied it to solar models and 5 M⊙ stellar models to calibrate the eight model parameters, as well as to investigate the effects of convective overshooting on the Sun and intermediate-mass stellar models.

  3. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
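
    A rough sketch of the core DMA recursion (in the spirit of Raftery-style dynamic model averaging, not necessarily the authors' exact implementation): model probabilities are flattened with a forgetting factor and then updated by each model's one-step predictive likelihood; the dynamic Occam's window additionally drops models whose weight falls below a cutoff. All numbers are illustrative.

```python
import numpy as np

def dma_update(weights, pred_likelihoods, alpha=0.99, cutoff=1e-3):
    """One step of forgetting, Bayesian updating, and an Occam's-window style cut."""
    w = np.asarray(weights, dtype=float) ** alpha        # forgetting flattens the weights
    w /= w.sum()
    w *= np.asarray(pred_likelihoods, dtype=float)       # update by predictive likelihoods
    w /= w.sum()
    w[w < cutoff] = 0.0                                   # drop negligible models
    return w / w.sum()

weights = np.array([1 / 3, 1 / 3, 1 / 3])                 # three candidate models, equal prior
pred = np.array([0.42, 0.10, 0.05])                       # hypothetical predictive densities
print(dma_update(weights, pred).round(3))
```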

  4. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  5. The reservoir model: a differential equation model of psychological regulation.

    Science.gov (United States)

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
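
    Stripped of the estimation machinery, the reservoir idea is a first-order differential equation, roughly dL/dt = input(t) - k*L(t), where L is the level of the construct and k governs how quickly it dissipates. A toy Euler integration with invented daily stress inputs (not the latent differential equation fitting used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

days, dt = 56, 1.0
k = 0.25                                             # hypothetical dissipation rate per day
inputs = rng.gamma(shape=1.5, scale=1.0, size=days)  # hypothetical daily stressor inputs

level = np.zeros(days)                               # "height" of the stress reservoir
for t in range(1, days):
    # Euler step of dL/dt = input(t) - k * L(t)
    level[t] = level[t - 1] + dt * (inputs[t - 1] - k * level[t - 1])

print("mean level:", level.mean().round(2), "  final level:", level[-1].round(2))
```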

  6. Development of bubble-induced turbulence model for advanced two-fluid model

    International Nuclear Information System (INIS)

    Hosoi, Hideaki; Yoshida, Hiroyuki

    2011-01-01

    A two-fluid model can simulate two-phase flow by computational cost less than detailed two-phase flow simulation method such as interface tracking method. The two-fluid model is therefore useful for thermal hydraulic analysis in the large-scale domain such as rod bundles. However, since the two-fluid model includes a lot of constitutive equations verified by use of experimental results, it has problems that the result of analyses depends on accuracy of the constitutive equations. To solve these problems, an advanced two-fluid model has been developed by Japan Atomic Energy Agency. In this model, interface tracking method is combined with two-fluid model to accurately predict large interface structure behavior. Liquid clusters and bubbles larger than a computational cell are calculated using the interface tracking method, and those smaller than the cell are simulated by the two-fluid model. The constitutive equations to evaluate the effects of small bubbles or droplets on two-phase flow are also required in the advanced two-fluid model, just as with the conventional two-fluid model. However, the dependency of small bubbles and droplets on two-phase flow characteristics is relatively small, and fewer experimental results are required to verify the characteristics of large interface structures. Turbulent dispersion force model is one of the most important constitutive equations for the advanced two-fluid model. The turbulent dispersion force model has been developed by many researchers for the conventional two-fluid model. However, existing models implicitly include the effects of large bubbles and the deformation of bubbles, and are unfortunately not applicable to the advanced two-fluid model. In the previous study, the authors suggested the turbulent dispersion force model based on the analogy of Brownian motion. And the authors improved the turbulent dispersion force model in consideration of bubble-induced turbulence to improve the analysis results for small

  7. Modeling environmental policy

    International Nuclear Information System (INIS)

    Martin, W.E.; McDonald, L.A.

    1997-01-01

    The eight book chapters demonstrate the link between the physical models of the environment and the policy analysis in support of policy making. Each chapter addresses an environmental policy issue using a quantitative modeling approach. The volume addresses three general areas of environmental policy - non-point source pollution in the agricultural sector, pollution generated in the extractive industries, and transboundary pollutants from burning fossil fuels. The book concludes by discussing the modeling efforts and the use of mathematical models in general. Chapters are entitled: modeling environmental policy: an introduction; modeling nonpoint source pollution in an integrated system (agri-ecological); modeling environmental and trade policy linkages: the case of EU and US agriculture; modeling ecosystem constraints in the Clean Water Act: a case study in Clearwater National Forest (subject to discharge from metal mining waste); costs and benefits of coke oven emission controls; modeling equilibria and risk under global environmental constraints (discussing energy and environmental interrelations); relative contribution of the enhanced greenhouse effect on the coastal changes in Louisiana; and the use of mathematical models in policy evaluations: comments. The paper on coke area emission controls has been abstracted separately for the IEA Coal Research CD-ROM

  8. Systemic resilience model

    International Nuclear Information System (INIS)

    Lundberg, Jonas; Johansson, Björn JE

    2015-01-01

    It has been realized that resilience as a concept involves several contradictory definitions, both for instance resilience as agile adjustment and as robust resistance to situations. Our analysis of resilience concepts and models suggest that beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering, and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems. - Highlights: • The SyRes model resolves contradictions between previous resilience definitions. • SyRes is a core model for envisioning and evaluating resilience metrics and models. • SyRes describes six functions in a systemic model. • They are anticipation, monitoring, response, recovery, learning, self-monitoring. • The model describes dependencies between constraints, functions and strategies

  9. Differential Topic Models.

    Science.gov (United States)

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we thus propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.

  10. Making sense to modelers: Presenting UML class model differences in prose

    DEFF Research Database (Denmark)

    Störrle, Harald

    2013-01-01

    Understanding the difference between two models, such as different versions of a design, can be difficult. It is a commonly held belief in the model differencing community that the best way of presenting a model difference is by using graph or tree-based visualizations. We disagree and present an alternative approach where sets of low-level model differences are abstracted into high-level model differences that lend themselves to being presented textually. This format is informed by an explorative survey to elicit the change descriptions modelers use themselves. Our approach is validated by a controlled experiment that tests three alternatives to presenting model differences. Our findings support our claim that the approach presented here is superior to EMF Compare.

  11. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  12. Modeling Ability Differentiation in the Second-Order Factor Model

    Science.gov (United States)

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  13. Data-Model and Inter-Model Comparisons of the GEM Outflow Events Using the Space Weather Modeling Framework

    Science.gov (United States)

    Welling, D. T.; Eccles, J. V.; Barakat, A. R.; Kistler, L. M.; Haaland, S.; Schunk, R. W.; Chappell, C. R.

    2015-12-01

    Two storm periods were selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of its high magnetospheric activity and extensive data coverage: the September 27 - October 4, 2002 corotating interaction region event and the October 22 - 29 coronal mass ejection event. During both events, the FAST, Polar, Cluster, and other missions made key observations, creating prime periods for data-model comparison. The GEM community has come together to simulate this period using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of these important periods compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Emphasis will be given to the second event. Density and velocity of oxygen and hydrogen throughout the lobes, plasma sheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. Inter-model comparisons will illustrate how the different outflow specifications affect the magnetosphere. Specifically, it is found that the GPW provides increased heavy ion outflow over a broader spatial range than the alternative

  14. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two....... The model describes both functional and timing properties of an interface...

  15. Wastewater treatment models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2011-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical-chemical processes, hydraulics and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2D/3D dynamic numerical models. Plant-wide modeling is set to advance further the practice...

  16. Wastewater Treatment Models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2008-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical–chemical processes, hydraulics, and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2-D/3-D dynamic numerical models. Plant-wide modeling is set to advance further...

  17. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect of controlling and managing software. Through such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on a detailed comparison of the proposed model with existing usability models.

  18. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    Science.gov (United States)

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.

  19. Validation of community models: 3. Tracing field lines in heliospheric models

    Science.gov (United States)

    MacNeice, Peter; Elliott, Brian; Acebal, Ariel

    2011-10-01

    Forecasting hazardous gradual solar energetic particle (SEP) bursts at Earth requires accurately modeling field line connections between Earth and the locations of coronal or interplanetary shocks that accelerate the particles. We test the accuracy of field lines reconstructed using four different models of the ambient coronal and inner heliospheric magnetic field, through which these shocks must propagate, including the coupled Wang-Sheeley-Arge (WSA)/ENLIL model. Evaluating the WSA/ENLIL model performance is important since it is the most sophisticated model currently available to space weather forecasters which can model interplanetary coronal mass ejections and, when coupled with particle acceleration and transport models, will provide a complete model for gradual SEP bursts. Previous studies using a simpler Archimedean spiral approach above 2.5 solar radii have reported poor performance. We test the accuracy of the model field lines connecting Earth to the Sun at the onset times of 15 impulsive SEP bursts, comparing the foot points of these field lines with the locations of surface events believed to be responsible for the SEP bursts. We find the WSA/ENLIL model performance is no better than the simplest spiral model, and the principal source of error is the model's inability to reproduce sufficient low-latitude open flux. This may be due to the model's use of static synoptic magnetograms, which fail to account for transient activity in the low corona, during which reconnection events believed to initiate the SEP acceleration may contribute short-lived open flux at low latitudes. Time-dependent coronal models incorporating these transient events may be needed to significantly improve Earth/Sun field line forecasting.
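
    The simple spiral baseline referred to above reduces to a one-line formula: along an ideal Archimedean (Parker) spiral the connection longitude at the Sun is shifted west of the observer by roughly Ω·r/V_sw, for solar rotation rate Ω, heliocentric distance r and solar wind speed V_sw. The sketch below uses round illustrative numbers and is not the WSA/ENLIL field-line tracing procedure.

```python
import numpy as np

OMEGA_SUN = 2.7e-6          # solar rotation rate, rad/s (approximate synodic value)
AU = 1.496e11               # astronomical unit, m

def parker_footpoint_shift_deg(v_sw_kms, r=AU):
    """Longitude offset (deg) of the solar source region west of the observer."""
    return np.degrees(OMEGA_SUN * r / (v_sw_kms * 1e3))

for v in (350.0, 450.0, 600.0):           # slow to fast solar wind, km/s
    print(f"V_sw = {v:5.0f} km/s  ->  footpoint ~{parker_footpoint_shift_deg(v):5.1f} deg west")
```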

  20. Modeling promoter grammars with evolving hidden Markov models

    DEFF Research Database (Denmark)

    Won, Kyoung-Jae; Sandelin, Albin; Marstrand, Troels Torben

    2008-01-01

    MOTIVATION: Describing and modeling biological features of eukaryotic promoters remains an important and challenging problem within computational biology. The promoters of higher eukaryotes in particular display a wide variation in regulatory features, which are difficult to model. Often several...... factors are involved in the regulation of a set of co-regulated genes. If so, promoters can be modeled with connected regulatory features, where the network of connections is characteristic for a particular mode of regulation. RESULTS: With the goal of automatically deciphering such regulatory structures......, we present a method that iteratively evolves an ensemble of regulatory grammars using a hidden Markov Model (HMM) architecture composed of interconnected blocks representing transcription factor binding sites (TFBSs) and background regions of promoter sequences. The ensemble approach reduces the risk...
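
    To make the HMM machinery concrete, here is a deliberately tiny two-state model (background versus a crude GC-rich 'TFBS-like' block) scored with the standard scaled forward algorithm. The transition and emission probabilities are invented; the method in the paper evolves whole ensembles of multi-block grammars rather than a fixed two-state chain.

```python
import numpy as np

symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

start = np.array([0.9, 0.1])                   # P(background), P(tfbs) at position 1
trans = np.array([[0.95, 0.05],                # background -> {background, tfbs}
                  [0.10, 0.90]])               # tfbs       -> {background, tfbs}
emit = np.array([[0.25, 0.25, 0.25, 0.25],     # background: uniform over A, C, G, T
                 [0.10, 0.40, 0.40, 0.10]])    # tfbs-like : GC-rich

def forward_loglik(seq):
    """Log-likelihood of a sequence under the toy HMM (scaled forward algorithm)."""
    obs = [symbols[c] for c in seq]
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

print("GC-rich promoter-like:", round(forward_loglik("ACGCGCGGTA"), 3))
print("AT-rich background   :", round(forward_loglik("ATATATATTA"), 3))
```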

  1. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  2. Dynamics models and modeling of tree stand development

    Directory of Open Access Journals (Sweden)

    M. V. Rogozin

    2015-04-01

    Full Text Available The paper briefly reviews scientific work in Russia and the CIS over the past 100 years. Logical and mathematical models are considered conceptually, and some results of their verification are shown. It was found that the models involve different laws and parameters, which together allow them to be divided into four categories: models of static states, development models, models of care for the natural forest, and models of cultivation. Each category has fulfilled, and continues to fulfil, its tasks in economic management. Thus, models of static states (growth progress tables) played a prominent role in establishing what the most productive (fully stocked) stands may be in different regions of the country. However, they do not answer the question of which initial states lead to the production of complete stands. Studies of stand growth have used system analysis, and they are dominated by work on static states detached from biological time. As a result, the real dynamics of stand growth have remained almost unexplored. It is no accident that «chrono-forestry», «plantation forestry» and even «non-traditional forestry» appeared, in which a number of new concepts of stand development are argued. This is quite in keeping with Kuhn (Kuhn, 2009): a crisis began in forestry, with alternative theories and conflicting scientific schools coexisting. To develop models of stand development, it is proposed to use the well-known method of repeated observations over 10–20 years, combined with reconstruction of the history of the initial density. This is based on studying the dynamics of stand indicators: the trunk, the crown overlap coefficient, the sum of the volumes of all crowns, and the relative length of the crown. Using these indicators, the researcher selects natural series of stand development with the same initial density. As a theoretical basis for the models it is possible to postulate the general properties of

  3. PORTER'S FIVE FORCES MODEL, SCOTT MORTON'S FIVE FORCES MODEL, BAKOS & TREACY MODEL ANALYZES STRATEGIC INFORMATION SYSTEMS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Indra Gamayanto

    2004-01-01

    Full Text Available Wollongong City Council (WCC) is one of the most progressive and innovative local government organizations in Australia. Wollongong City Council uses Information Technology to gain competitive advantage and to face a global economy in the future. Porter's Five Forces model is one of the models that can be used at Wollongong City Council, because it has strength in the relationship between buyers and suppliers (bargaining power of suppliers and bargaining power of buyers). Another model, Scott Morton's Five Forces model, has strength in analyzing the social impact factor, so it can also be used to gain competitive advantage in the future and to support good IT/IS strategic planning. The Bakos & Treacy model is almost the same as Porter's model, but it can also be applied to Wollongong City Council to improve capability in transforming the organization, efficiency, and effectiveness.

  4. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    framework provides model selection by quantifying the models' generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting for each window length the predictive...... together whereas short windows are more unstable and influenced by noise, and we find that our heuristic correctly identifies an adequate level of complexity. On single-subject resting state fMRI data we find that dynamic models generally outperform static models and using the proposed heuristic points...
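
    The observations such a Wishart mixture would cluster are essentially windowed covariance (connectivity) matrices. A minimal sketch of producing those observations from a multichannel time series with a sliding window is shown below; the variational fitting of the mixture itself is not reproduced here, and the data are random stand-ins for region time series.

```python
import numpy as np

rng = np.random.default_rng(3)

n_regions, n_timepoints = 5, 600
signals = rng.standard_normal((n_timepoints, n_regions))   # stand-in for fMRI region signals

window, step = 60, 30
starts = range(0, n_timepoints - window + 1, step)

# One covariance ("connectivity") matrix per window: these scatter-like observations
# are what a Wishart mixture model would assign to latent connectivity states.
conn = np.stack([np.cov(signals[s:s + window].T) for s in starts])

print("windowed connectivity matrices:", conn.shape)        # (n_windows, 5, 5)
print("first matrix diagonal:", np.round(np.diag(conn[0]), 2))
```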

  5. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  6. Can the Stephani model be an alternative to FRW accelerating models?

    International Nuclear Information System (INIS)

    Godlowski, Wlodzimierz; Stelmach, Jerzy; Szydlowski, Marek

    2004-01-01

    A class of Stephani cosmological models as a prototype of a non-homogeneous universe is considered. The non-homogeneity can lead to accelerated evolution, which is now observed from the SNe Ia data. Three samples of type Ia supernovae obtained by Perlmutter et al, Tonry et al and Knop et al are taken into account. Different statistical methods (best fits as well as maximum likelihood method) to obtain estimation for the model parameters are used. The Stephani model is considered as an alternative to the ΛCDM model in the explanation of the present acceleration of the universe. The model explains the acceleration of the universe at the same level of accuracy as the ΛCDM model (χ² statistics are comparable). From the best fit analysis it follows that the Stephani model is characterized by a higher value of the density parameter Ω_m0 than the ΛCDM model. It is also shown that the model is consistent with the location of CMB peaks

  7. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has similar characteristics (mean, variance and power spectral density) as the true CAST estimation error. In this way, ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.

  8. Optimizing a gap conductance model applicable to VVER-1000 thermal–hydraulic model

    International Nuclear Information System (INIS)

    Rahgoshay, M.; Hashemi-Tilehnoee, M.

    2012-01-01

    Highlights: ► Two known conductance models for application in a VVER-1000 thermal–hydraulic code are examined. ► An optimized gap conductance model is developed which can predict the gap conductance in good agreement with FSAR data. ► The licensed thermal–hydraulic code is coupled with the gap conductance model predictor externally. -- Abstract: The modeling of gap conductance for application in VVER-1000 thermal–hydraulic codes is addressed. Two known models, namely the CALZA-BINI and RELAP5 gap conductance models, are examined. By externally linking the gap conductance models with the COBRA-EN thermal-hydraulic code, the acceptable range of each model is specified. The result of each gap conductance model versus linear heat rate has been compared with FSAR data. A linear heat rate of about 9 kW/m is the boundary for the optimization process. Since each gap conductance model has its advantages and limitations, the optimized gap conductance model can predict the gap conductance better than either of the two other models individually.

  9. Towards a standard model for research in agent-based modeling and simulation

    Directory of Open Access Journals (Sweden)

    Nuno Fachada

    2015-11-01

    Full Text Available Agent-based modeling (ABM is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as an independent decision-making agent. ABMs are very sensitive to implementation details. Thus, it is very easy to inadvertently introduce changes which modify model dynamics. Such problems usually arise due to the lack of transparency in model descriptions, which constrains how models are assessed, implemented and replicated. In this paper, we present PPHPC, a model which aims to serve as a standard in agent based modeling research, namely, but not limited to, conceptual model specification, statistical analysis of simulation output, model comparison and parallelization studies. This paper focuses on the first two aspects (conceptual model specification and statistical analysis of simulation output, also providing a canonical implementation of PPHPC. The paper serves as a complete reference to the presented model, and can be used as a tutorial for simulation practitioners who wish to improve the way they communicate their ABMs.
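
    To make the "independent decision-making agent" idea concrete, here is a deliberately tiny agent-based loop (random movement, energy gain/loss, reproduction, death) in the spirit of a predator-prey toy world. It is not the PPHPC specification, whose agent types, rules and outputs are defined precisely in the paper; every rule and constant below is invented.

```python
import random

random.seed(4)
GRID, STEPS = 20, 50

# Each agent is just a dict; a full model would distinguish prey, predators, grass, etc.
agents = [{"x": random.randrange(GRID), "y": random.randrange(GRID), "energy": 10}
          for _ in range(100)]

counts = []
for _ in range(STEPS):
    newborn = []
    for a in agents:
        a["x"] = (a["x"] + random.choice((-1, 0, 1))) % GRID   # random walk on a torus
        a["y"] = (a["y"] + random.choice((-1, 0, 1))) % GRID
        a["energy"] += random.choice((-2, 3))                  # crude foraging outcome
        if a["energy"] >= 20:                                  # reproduce and split energy
            a["energy"] //= 2
            newborn.append(dict(a))
    agents = [a for a in agents + newborn if a["energy"] > 0]  # starved agents die
    counts.append(len(agents))

print("population every 10 steps:", counts[::10])
```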

  10. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
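
    The flavor of the approach, much simplified, can be conveyed by sweeping a candidate parameter of a toy stochastic model, estimating by plain Monte Carlo the probability that a behavioral specification holds, and keeping parameter values that meet a required probability. The real algorithm replaces this brute-force sweep with Bayesian model checking, sequential hypothesis testing and stochastic optimization; the "inflammation" dynamics and the specification below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_response(dose, n_steps=40):
    """Toy stochastic trajectory driven by a 10-step dose pulse (purely illustrative)."""
    level, traj = 0.0, []
    for t in range(n_steps):
        stimulus = dose if t < 10 else 0.0
        level += 0.3 * stimulus - 0.1 * level + rng.normal(0.0, 0.2)
        traj.append(level)
    return np.array(traj)

def spec_holds(traj):
    """Specification: response peaks above 2.0 but settles back below 1.0 by the end."""
    return traj.max() > 2.0 and traj[-1] < 1.0

def satisfaction_probability(dose, n_runs=200):
    return np.mean([spec_holds(simulate_response(dose)) for _ in range(n_runs)])

for dose in np.linspace(0.2, 1.4, 7):
    p = satisfaction_probability(dose)
    print(f"dose = {dose:4.2f}   P(spec) = {p:4.2f}   {'accept' if p >= 0.9 else 'reject'}")
```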

  11. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
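
    A small sketch of sparse estimation in the p >> n regime with the LASSO as mentioned in the abstract (here via scikit-learn); the data are synthetic, with only a handful of truly active variables for the estimator to recover, and the regularization strength is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)

n, p = 80, 1000                                # many more variables than samples
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]    # only 5 truly active coefficients
y = X @ beta_true + rng.normal(0.0, 0.5, n)

model = Lasso(alpha=0.1).fit(X, y)             # L1-regularized (sparse) linear model

selected = np.flatnonzero(model.coef_)
print("variables selected:", selected.size)
print("true actives recovered:", sorted(set(selected) & set(range(5))))
```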

  12. STEREOMETRIC MODELLING

    Directory of Open Access Journals (Sweden)

    P. Grimaldi

    2012-07-01

    Full Text Available Stereometric modelling means modelling achieved with: – the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance of on average 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); – the visualization of the shot in two distinct windows; – the stereoscopic viewing of the shot while modelling. Since "3D vision" is often inaccurately used to mean the simple perspective of an object, the word stereo must be added, so that "3D stereo vision" stands for a true three-dimensional view in which the width, height and depth of the surveyed image can be measured. This is achieved through the development of a stereometric model, either real or virtual, and its "materialization", either real or virtual, as an optical-stereometric model made visible with a stereoscope. A continuous online updating of the cultural heritage record is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available online at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view

  13. A study on the intrusion model by physical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    In physical modeling, the actual phenomena of seismic wave propagation are measured directly, as in a field survey, and the structure and physical properties of the subsurface are known in advance. Datasets measured by physical modeling are therefore well suited as input data for testing the efficiency of various inversion algorithms. An underground structure formed by intrusion, which is often seen in seismic sections for oil exploration, is investigated by physical modeling. The model is characterized by various types of layer boundaries with steep dip angles. These physical modeling data are therefore valuable not only for interpreting seismic sections for oil exploration as a case history, but also for developing data processing techniques and for assessing the capability of software such as migration and full waveform inversion. (author). 5 refs., 18 figs.

  14. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  15. A model for photothermal responses of flowering in rice. II. Model evaluation.

    NARCIS (Netherlands)

    Yin, X.; Kropff, M.J.; Nakagawa, H.; Horie, T.; Goudriaan, J.

    1997-01-01

    A detailed nonlinear model, the 3s-Beta model, for photothermal responses of flowering in rice (Oryza sativa L.) was evaluated for predicting rice flowering date in field conditions. This model was compared with three other models: a three-plane linear model and two nonlinear models, viz. the

  16. Modeling North Atlantic Nor'easters With Modern Wave Forecast Models

    Science.gov (United States)

    Perrie, Will; Toulany, Bechara; Roland, Aron; Dutour-Sikiric, Mathieu; Chen, Changsheng; Beardsley, Robert C.; Qi, Jianhua; Hu, Yongcun; Casey, Michael P.; Shen, Hui

    2018-01-01

    Three state-of-the-art operational wave forecast model systems are implemented on fine-resolution grids for the Northwest Atlantic. These models are: (1) a composite model system consisting of SWAN implemented within WAVEWATCHIII® (the latter is hereafter, WW3) on a nested system of traditional structured grids, (2) an unstructured grid finite-volume wave model denoted "SWAVE," using SWAN physics, and (3) an unstructured grid finite element wind wave model denoted as "WWM" (for "wind wave model") which uses WW3 physics. Models are implemented on grid systems that include relatively large domains to capture the wave energy generated by the storms, as well as including fine-resolution nearshore regions of the southern Gulf of Maine with resolution on the scale of 25 m to simulate areas where inundation and coastal damage have occurred, due to the storms. Storm cases include three intense midlatitude cases: a spring Nor'easter storm in May 2005, the Patriot's Day storm in 2007, and the Boxing Day storm in 2010. Although these wave model systems have comparable overall properties in terms of their performance and skill, it is found that there are differences. Models that use more advanced physics, as presented in recent versions of WW3, tuned to regional characteristics, as in the Gulf of Maine and the Northwest Atlantic, can give enhanced results.

  17. Limits with modeling data and modeling data with limits

    Directory of Open Access Journals (Sweden)

    Lionello Pogliani

    2006-01-01

    Modeling of the solubility of amino acids and purine and pyrimidine bases with a set of sixteen molecular descriptors has been thoroughly analyzed to detect and understand the reasons for anomalies in the description of this property for these two classes of compounds. Unsatisfactory modeling can be ascribed to incomplete collateral data, i.e., to the fact that there is insufficient data known about the behavior of these compounds in solution. This is usually because intermolecular forces cannot be modeled. The anomalous modeling can be detected from the rather large values of the standard deviation of the estimates of the whole set of compounds, and from the unsatisfactory modeling of some of the subsets of these compounds. Thus the detected abnormalities can be used (i) to get an idea about weak intermolecular interactions such as hydration, self-association, and the hydrogen-bond phenomena in solution, and (ii) to reshape the molecular descriptors with the introduction of parameters that allow better modeling. This last procedure should be used with care, bearing in mind that the solubility phenomenon is rather complex.

  18. Transparent Model Transformation: Turning Your Favourite Model Editor into a Transformation Tool

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2015-01-01

    Current model transformation languages are supported by dedicated editors, often closely coupled to a single execution engine. We introduce Transparent Model Transformation, a paradigm enabling modelers to specify transformations using a familiar tool: their model editor. We also present VMTL, th...... model transformation tool sharing the model editor’s benefits, transparently....

  19. Model Fusion Tool - the Open Environmental Modelling Platform Concept

    Science.gov (United States)

    Kessler, H.; Giles, J. R.

    2010-12-01

    The vision of an Open Environmental Modelling Platform - seamlessly linking geoscience data, concepts and models to aid decision making in times of environmental change. Governments and their executive agencies across the world are facing increasing pressure to make decisions about the management of resources in light of population growth and environmental change. In the UK for example, groundwater is becoming a scarce resource for large parts of its most densely populated areas. At the same time river and groundwater flooding resulting from high rainfall events are increasing in scale and frequency and sea level rise is threatening the defences of coastal cities. There is also a need for affordable housing, improved transport infrastructure and waste disposal as well as sources of renewable energy and sustainable food production. These challenges can only be resolved if solutions are based on sound scientific evidence. Although we have knowledge and understanding of many individual processes in the natural sciences it is clear that a single science discipline is unable to answer the questions and their inter-relationships. Modern science increasingly employs computer models to simulate the natural, economic and human system. Management and planning requires scenario modelling, forecasts and ‘predictions’. Although the outputs are often impressive in terms of apparent accuracy and visualisation, they are inherently not suited to simulate the response to feedbacks from other models of the earth system, such as the impact of human actions. Geological Survey Organisations (GSO) are increasingly employing advances in Information Technology to visualise and improve their understanding of geological systems. Instead of 2 dimensional paper maps and reports many GSOs now produce 3 dimensional geological framework models and groundwater flow models as their standard output. Additionally the British Geological Survey have developed standard routines to link geological

  20. Building generic anatomical models using virtual model cutting and iterative registration

    Directory of Open Access Journals (Sweden)

    Hallgrímsson Benedikt

    2010-02-01

    Background: Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. Methods: The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting the sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. Results: After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Conclusions: Our method is very flexible and easy to use such that anyone can use image stacks to create models and

  1. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
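
    A small illustration of the full-conditional-specification (chained equations) idea using scikit-learn's IterativeImputer on synthetic data; this is not the restricted general location joint model discussed above, and the variable names and missingness pattern are made up. Repeated runs with different random seeds give a simple form of multiple imputation whose results can then be pooled.

    # FCS-style (chained equations) imputation on synthetic data, repeated to
    # mimic multiple imputation; not the restricted general location model.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(42)
    n = 200
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
    y = 1.5 * x1 - 1.0 * x2 + rng.normal(scale=0.5, size=n)
    data = np.column_stack([x1, x2, y])

    # Introduce roughly 20% missingness completely at random in x2 and y.
    mask = rng.random(data.shape) < 0.2
    mask[:, 0] = False
    data_missing = data.copy()
    data_missing[mask] = np.nan

    # Several imputed datasets from independent imputer runs.
    imputations = []
    for m in range(5):
        imputer = IterativeImputer(max_iter=20, sample_posterior=True, random_state=m)
        imputations.append(imputer.fit_transform(data_missing))

    # Pool a simple quantity of interest (the mean of y) across imputations.
    pooled_mean_y = np.mean([imp[:, 2].mean() for imp in imputations])
    print("pooled mean of y across 5 imputations:", round(float(pooled_mean_y), 3))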

  2. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  3. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before constructing a model in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
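
    To make the first error model concrete, the sketch below applies a Box-Cox transformation to synthetic observed and forecasted inflows and fits a first order autoregressive model to the transformed forecast errors; the conditioning on climatic conditions used in the study is omitted and all numbers are made up.

    # Box-Cox transform of inflows, then an AR(1) model for the forecast errors
    # (synthetic data; no conditioning on climate classes).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 365
    observed = rng.gamma(shape=3.0, scale=10.0, size=n)               # synthetic inflows
    forecast = observed * rng.lognormal(mean=0.0, sigma=0.15, size=n)

    # Box-Cox transform; the same lambda is applied to both series.
    obs_t, lam = stats.boxcox(observed)
    fc_t = stats.boxcox(forecast, lmbda=lam)
    errors = obs_t - fc_t

    # Fit AR(1): e_t = phi * e_{t-1} + eps_t, by least squares on lagged errors.
    e_prev, e_curr = errors[:-1], errors[1:]
    phi = float(np.dot(e_prev, e_curr) / np.dot(e_prev, e_prev))
    residual_sd = float((e_curr - phi * e_prev).std(ddof=1))
    print(f"Box-Cox lambda = {lam:.3f}, AR(1) phi = {phi:.3f}, residual sd = {residual_sd:.3f}")

    # Illustrative one-step-ahead error prediction in the transformed space.
    predicted_next_error = phi * errors[-1]
    print("predicted next transformed error:", round(float(predicted_next_error), 3))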

  4. Modelling open pit shovel-truck systems using the Machine Repair Model

    Energy Technology Data Exchange (ETDEWEB)

    Krause, A.; Musingwini, C. [CBH Resources Ltd., Sydney, NSW (Australia). Endeaver Mine

    2007-08-15

    Shovel-truck systems for loading and hauling material in open pit mines are now routinely analysed using simulation models or off-the-shelf simulation software packages, which can be very expensive for once-off or occasional use. The simulation models invariably produce different estimations of fleet sizes due to their differing estimations of cycle time. No single model or package can accurately estimate the required fleet size because the fleet operating parameters are characteristically random and dynamic. In order to improve confidence in sizing the fleet for a mining project, at least two estimation models should be used. This paper demonstrates that the Machine Repair Model can be modified and used as a model for estimating truck fleet size in an open pit shovel-truck system. The modified Machine Repair Model is first applied to a virtual open pit mine case study. The results compare favourably to output from other estimation models using the same input parameters for the virtual mine. The modified Machine Repair Model is further applied to an existing open pit coal operation, the Kwagga Section of Optimum Colliery as a case study. Again the results confirm those obtained from the virtual mine case study. It is concluded that the Machine Repair Model can be an affordable model compared to off-the-shelf generic software because it is easily modelled in Microsoft Excel, a software platform that most mines already use.
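
    For readers unfamiliar with the Machine Repair Model, the sketch below computes steady-state measures of the classical finite-source (machine repair) queue with a single server, interpreting trucks as the machines and the shovel as the repairman; the rates and fleet sizes are hypothetical and are not taken from the case studies above.

    # Finite-source M/M/1 (machine repair) queue applied to a shovel-truck system.
    from math import factorial

    def machine_repair(n_trucks, lam, mu):
        """Steady-state measures: each truck 'fails' (returns to the shovel) at
        rate lam while hauling, and the single shovel 'repairs' (loads) at rate mu."""
        r = lam / mu
        # Unnormalized probabilities of n trucks at the shovel (queueing or loading).
        weights = [factorial(n_trucks) // factorial(n_trucks - n) * r**n
                   for n in range(n_trucks + 1)]
        total = sum(weights)
        p = [w / total for w in weights]
        mean_at_shovel = sum(n * pn for n, pn in enumerate(p))
        shovel_utilization = 1.0 - p[0]
        throughput = mu * shovel_utilization      # loads per hour
        return mean_at_shovel, shovel_utilization, throughput

    # Hypothetical figures: each truck returns to the shovel every 20 min on average
    # (lam = 3/h) and loading takes 4 min on average (mu = 15/h).
    for trucks in range(3, 9):
        q, util, thr = machine_repair(trucks, lam=3.0, mu=15.0)
        print(f"{trucks} trucks: mean at shovel {q:.2f}, "
              f"shovel utilization {util:.2f}, loads/hour {thr:.1f}")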

  5. CFD Wake Modelling with a BEM Wind Turbine Sub-Model

    Directory of Open Access Journals (Sweden)

    Anders Hallanger

    2013-01-01

    Modelling of wind farms using computational fluid dynamics (CFD) resolving the flow field around each wind turbine's blades on a moving computational grid is still too costly and time consuming in terms of computational capacity and effort. One strategy is to use sub-models for the wind turbines, and sub-grid models for turbulence production and dissipation, to model the turbulent viscosity accurately enough to handle interaction of wakes in wind farms. A wind turbine sub-model, based on Blade Element Momentum (BEM) theory, see Hansen (2008), has been implemented in an in-house CFD code, see Hallanger et al. (2002). The tangential and normal reaction forces from the wind turbine blades are distributed on the control volumes (CVs) at the wind turbine rotor location as sources in the conservation equations of momentum. The classical k-epsilon turbulence model of Launder and Spalding (1972) is implemented with a sub-grid turbulence (SGT) model, see Sha and Launder (1979) and Sand and Salvesen (1994). Steady-state CFD simulations were compared with flow and turbulence measurements in the wake of a model-scale wind turbine, see Krogstad and Eriksen (2011). The simulated results compared best with experiments when stalling (boundary layer separation) on the wind turbine blades did not occur. The SGT model did improve the turbulence level in the wake but seems to smear the wake flow structure. It should be noted that the simulations are carried out as steady state, not including the flow oscillations caused by vortex shedding from tower and blades as they were in the experiments. Further improvement of the simulated velocity defect and turbulence level seems to rely on better parameter estimation for the SGT model, improvements to the SGT model, and possibly transient instead of steady-state simulations.

  6. Aggregated wind power plant models consisting of IEC wind turbine models

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela

    2015-01-01

    The common practice regarding the modelling of large generation components has been to make use of models representing the performance of the individual components with a required level of accuracy and detail. Owing to the rapid increase of wind power plants comprising large numbers of wind turbines, representing each individual wind turbine in detail with its own parameters and models makes it necessary to develop aggregated wind power plant models, considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable speed wind turbine models (type 3 and type 4) with a power plant controller are presented. The performance of the detailed benchmark wind power plant model and the aggregated model are compared by means of simulations for the specified test cases. Consequently, the results are summarized and discussed.

  7. Modeling Historical Land Cover and Land Use: A Review fromContemporary Modeling

    Directory of Open Access Journals (Sweden)

    Laura Alfonsina Chang-Martínez

    2015-09-01

    Spatially-explicit land cover land use change (LCLUC) models are becoming increasingly useful tools for historians and archaeologists. Such models have been developed and used by geographers, ecologists and land managers over the last few decades to carry out prospective scenarios. In this paper, we review historical models and compare them with prospective models, on the assumption that the ample experience gained in developing prospective simulation models can benefit the development of models whose objective is to simulate changes that happened in the past. The review is divided into three sections: in the first section, we explain the functioning of contemporary LCLUC models; in the second section, we analyze historical LCLUC models; in the third section, we compare these two types of models, and finally, we discuss the contributions of contemporary LCLUC models to historical LCLUC models.

  8. Individual model evaluation and probabilistic weighting of models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-01-01

    This note stresses the importance of trying to assess the accuracy of each model individually. Putting a Bayesian probability distribution on a population of models faces conceptual and practical complications, and apparently can come only after the work of evaluating the individual models. Moreover, the primary issue is "How good is this model?" Therefore, the individual evaluations are first in both chronology and importance. They are not easy, but some ideas are given here on how to perform them

  9. Beginning SQL Server Modeling Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL M

  10. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper mainly studies the multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics system, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronization modeling.
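
    As a minimal illustration of the persistent-storage idea (an in-memory object model converted to a binary stream and back), the Python sketch below round-trips a small object model with pickle; the class names are invented and the MDDT tooling itself is unrelated to this example.

    # Object model (data model in memory) <-> binary stream (storage model).
    import pickle
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Element:
        name: str
        kind: str

    @dataclass
    class ModelDiagram:
        title: str
        elements: List[Element] = field(default_factory=list)

    # Build a small object model in memory.
    diagram = ModelDiagram(title="embedded-software view",
                           elements=[Element("scheduler", "component"),
                                     Element("bus", "connector")])

    # Data model -> storage model (binary stream), e.g. for saving or syncing.
    blob = pickle.dumps(diagram)

    # Storage model -> data model, e.g. on another collaborator's machine.
    restored = pickle.loads(blob)
    assert restored == diagram
    print(f"round-tripped {len(blob)} bytes, title = {restored.title!r}")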

  11. Business Model Visualization

    OpenAIRE

    Zagorsek, Branislav

    2013-01-01

    Business model describes the company’s most important activities, proposed value, and the compensation for the value. Business model visualization enables to simply and systematically capture and describe the most important components of the business model while the standardization of the concept allows the comparison between companies. There are several possibilities how to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...

  12. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.

  13. On coupling global biome models with climate models

    International Nuclear Information System (INIS)

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992), which predicts global vegetation patterns in equilibrium with climate, is coupled with the ECHAM climate model of the Max-Planck-Institut fuer Meteorologie, Hamburg. It is found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only between the initial biome distribution and the biome distribution computed after the first simulation period, provided that the climate-biome model is started from a biome distribution that resembles the present-day distribution. After the first simulation period, there is no significant shrinking, expanding, or shifting of biomes. Likewise, no trend is seen in global averages of land-surface parameters and climate variables. (orig.)

  14. Models in Science Education: Applications of Models in Learning and Teaching Science

    Science.gov (United States)

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  15. Weighted-indexed semi-Markov models for modeling financial returns

    International Nuclear Information System (INIS)

    D’Amico, Guglielmo; Petroni, Filippo

    2012-01-01

    In this paper we propose a new stochastic model based on a generalization of semi-Markov chains for studying the high frequency price dynamics of traded stocks. We assume that the financial returns are described by a weighted-indexed semi-Markov chain model. We show, through Monte Carlo simulations, that the model is able to reproduce important stylized facts of financial time series such as the first-passage-time distributions and the persistence of volatility. The model is applied to data from the Italian and German stock markets from 1 January 2007 until the end of December 2010. (paper)

  16. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log K_M values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  17. Understanding the Day Cent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter-observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
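
    The core idea of calibration with PEST, minimizing a weighted sum of squared residuals across several observation groups, can be sketched on a toy two-parameter model (not DayCent) with scipy; the weights, data and parameter names below are synthetic.

    # PEST-like inverse modeling sketch: weighted least squares over two
    # observation types for a toy model (not DayCent).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 10.0, 40)

    def toy_model(params, t):
        """Returns two observation types, a 'stock' and a 'flux'."""
        k, scale = params
        stock = scale * (1.0 - np.exp(-k * t))
        flux = scale * k * np.exp(-k * t)
        return stock, flux

    # Synthetic observations generated from known parameters plus noise.
    obs_stock, obs_flux = toy_model((0.4, 50.0), t)
    obs_stock = obs_stock + rng.normal(scale=2.0, size=t.size)
    obs_flux = obs_flux + rng.normal(scale=0.5, size=t.size)

    # Observation-group weights (inverse of the assumed measurement error).
    w_stock, w_flux = 1.0 / 2.0, 1.0 / 0.5

    def weighted_residuals(params):
        stock, flux = toy_model(params, t)
        return np.concatenate([w_stock * (stock - obs_stock),
                               w_flux * (flux - obs_flux)])

    fit = least_squares(weighted_residuals, x0=[0.1, 20.0],
                        bounds=([1e-3, 1.0], [2.0, 200.0]))
    print("estimated parameters:", np.round(fit.x, 3),
          " weighted SSR:", round(float(2.0 * fit.cost), 1))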

  18. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, which can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some of them are not accurate or are adapted only to specific CAD formats. To convert complex CAD models into GDML format accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting CAD geometry models into GDML geometry models. The essence of this method is the translation between the CAD model, represented with boundary representation (B-REP), and the GDML model, represented with constructive solid geometry (CSG). At first, the CAD model was decomposed into several simple solids, each having only one closed shell. Each simple solid was then decomposed into a set of convex shells. Corresponding GDML convex basic solids were then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model was assembled with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 install package. The results showed that this method can convert standard CAD models accurately, and can be used for Geant4 automatic modeling. (authors)

  19. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation

  20. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
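
    The linear dependency at the heart of the identification problem (cohort = period - age) is easy to see numerically: with all three effects coded linearly, the design matrix loses a rank, and constraining any one slope (for example setting the cohort slope to zero) restores full rank. A small sketch with synthetic data:

    # Rank deficiency of the fixed-effects age-period-cohort design matrix.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    age = rng.integers(20, 80, size=n).astype(float)
    period = rng.integers(1980, 2020, size=n).astype(float)
    cohort = period - age                         # exact linear dependency

    X = np.column_stack([np.ones(n), age, period, cohort])
    print("columns:", X.shape[1], " rank:", np.linalg.matrix_rank(X))   # rank 3, not 4

    # A just-identifying constraint (here: dropping the cohort slope, i.e. setting it
    # to zero) restores full rank, but the estimates depend on the constraint chosen.
    X_constrained = X[:, :3]
    print("constrained rank:", np.linalg.matrix_rank(X_constrained))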

  1. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools to calculate the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty by using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals uncertainty intervals for the parameters. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty; only one or two out of five to six parameters in each model set display a high uncertainty (e.g. pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), the latter being more than twice as high. The model sets also show a high variation in uncertainty intervals for deep percolation, with an interquartile range (IQR) of
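
    A compact sketch of the GLUE procedure itself (toy one-parameter storage model, synthetic observations, Nash-Sutcliffe efficiency as the likelihood measure) may help readers unfamiliar with the method; it is not one of the four soil water balance models compared above.

    # GLUE sketch: sample parameters, keep 'behavioural' sets above a likelihood
    # threshold, and form likelihood-weighted prediction bounds (synthetic data).
    import numpy as np

    rng = np.random.default_rng(11)
    t = np.arange(60)

    def soil_water(k):
        """Toy storage model: exponential drawdown from an initial store of 100 mm."""
        return 100.0 * np.exp(-k * t)

    observed = soil_water(0.05) + rng.normal(scale=3.0, size=t.size)

    # 1. Monte Carlo sampling of the parameter and model evaluation.
    k_samples = rng.uniform(0.01, 0.15, size=5000)
    sims = np.array([soil_water(k) for k in k_samples])

    # 2. Likelihood measure: Nash-Sutcliffe efficiency of each simulation.
    ns = 1.0 - ((sims - observed) ** 2).sum(axis=1) / ((observed - observed.mean()) ** 2).sum()

    # 3. Behavioural sets: keep simulations above a threshold efficiency.
    behavioural = ns > 0.8
    weights = ns[behavioural] / ns[behavioural].sum()
    print("behavioural parameter sets:", int(behavioural.sum()))

    # 4. Likelihood-weighted 5-95% prediction bounds at each time step.
    lower, upper = [], []
    for j in range(t.size):
        column = sims[behavioural][:, j]
        idx = np.argsort(column)
        cdf = np.cumsum(weights[idx])
        lower.append(column[idx][np.searchsorted(cdf, 0.05)])
        upper.append(column[idx][np.searchsorted(cdf, 0.95)])
    print("uncertainty band width at t = 0:", round(float(upper[0] - lower[0]), 1), "mm")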

  2. Sunspot Modeling: From Simplified Models to Radiative MHD Simulations

    Directory of Open Access Journals (Sweden)

    Rolf Schlichenmaier

    2011-09-01

    We review our current understanding of sunspots from the scales of their fine structure to their large-scale (global) structure, including the processes of their formation and decay. Recently, sunspot models have undergone a dramatic change. In the past, several aspects of sunspot structure were addressed by static MHD models with parametrized energy transport. Models of sunspot fine structure have relied heavily on strong assumptions about flow and field geometry (e.g., flux tubes, "gaps", convective rolls), which were motivated in part by the observed filamentary structure of penumbrae or by the necessity of explaining the substantial energy transport required to maintain the penumbral brightness. However, none of these models could self-consistently explain all aspects of penumbral structure (energy transport, filamentation, Evershed flow). In recent years, 3D radiative MHD simulations have advanced dramatically to the point at which models of complete sunspots with sufficient resolution to capture sunspot fine structure are feasible. Here overturning convection is the central element responsible for energy transport, filamentation leading to fine structure, and the driving of strong outflows. On the larger scale these models are also in the process of addressing the subsurface structure of sunspots as well as sunspot formation. With this shift in modeling capabilities and the recent advances in high resolution observations, future research will be guided by comparing observation and theory.

  3. Model documentation renewable fuels module of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1997 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. This documentation report serves three purposes. First, it is a reference document for model analysts, model users, and the public interested in the construction and application of the RFM. Second, it meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Finally, such documentation facilitates continuity in EIA model development by providing information sufficient to perform model enhancements and data updates as part of EIA's ongoing mission to provide analytical and forecasting information systems.

  4. Model documentation renewable fuels module of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-04-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1997 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. This documentation report serves three purposes. First, it is a reference document for model analysts, model users, and the public interested in the construction and application of the RFM. Second, it meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Finally, such documentation facilitates continuity in EIA model development by providing information sufficient to perform model enhancements and data updates as part of EIA's ongoing mission to provide analytical and forecasting information systems

  5. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  6. OPEC model : adjustment or new model

    International Nuclear Information System (INIS)

    Ayoub, A.

    1994-01-01

    Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, privatization. This article provides answers to two major questions: what are the reasons for these changes? Do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? This would imply a profound change in the political and institutional systems of oil producing countries. (Author)

  7. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is: to solve a problem - a step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We will also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.

  8. Metamodeling for Business Model Design : Facilitating development and communication of Business Model Canvas (BMC) models with an OMG standards-based metamodel.

    OpenAIRE

    Hauksson, Hilmar

    2013-01-01

    Interest in business models and business modeling has increased rapidly since the mid-1990s and there are numerous approaches used to create business models. The business model concept has many definitions, which can lead to confusion and slower progress in the research and development of business models. A business model ontology (BMO) was created in 2004, in which the business model concept was conceptualized based on an analysis of existing literature. A few years later the Business Model Can...

  9. Thermal Models of the Niger Delta: Implications for Charge Modelling

    International Nuclear Information System (INIS)

    Ejedawe, J.

    2002-01-01

    There are generally three main sources of temperature data: BHT data from log headers, production temperature data, and continuous temperature logs. From the analysis of continuous temperature profiles of over 100 wells in the Niger Delta, two main thermal models (single leg and dogleg) are defined, with occasional occurrence of a modified dogleg model. The dogleg model is characterised by a shallow interval of low geothermal gradient (3.0°C/100 m). This is characteristically developed onshore. Thermal modelling in the offshore area is simple, requiring only consideration of heat transients, while modelling in the onshore requires modelling programmes with built-in modules to handle convective heat flow dissipation in the shallow layer. Current work-around methods would involve tweaking of thermal conductivity values to mimic the underlying heat flow process effects, or heat flow mapping above and below the depth of gradient change. These methods allow for more realistic thermal modelling, hydrocarbon type prediction, and also more accurate prediction of temperature prior to drilling and of reservoir rock properties. The regional distribution of the models also impacts on the regional hydrocarbon distribution pattern in the Niger Delta

  10. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models

  11. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    State of the art challenges in sustainable management of water resources have created demand for integrated, flexible and easy to use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus can not be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS, GPSRU (Fort Collins, CO, USA), USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows the implementation of single process components to be compiled and applied as custom tailored model assemblies. This paper describes basic principles of the OMS and its main components and explains in more detail how the problems during coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example the implementation of the hydrological model J2000 is discussed.

  12. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations, for vertical speed w and the nonhydrostatic part of pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model and then nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model. Extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the dynamic time integration of the nonhydrostatic part, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from a hydrostatic to a nonhydrostatic model.

  13. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. VENTILATION MODEL

    International Nuclear Information System (INIS)

    V. Chipman

    2002-01-01

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of the heat produced by radionuclide decay that is carried away by the ventilation air. One minus the heat removal fraction is called the wall heat fraction: the remaining heat that is transferred by conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their postclosure analyses.
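    The bookkeeping between the heat removal fraction and the wall heat fraction is simple arithmetic; the short example below illustrates it with made-up numbers, not values from the analysis.

```python
# Illustrative, invented values (per metre of drift).
decay_heat = 1.45e3          # W produced by radionuclide decay
heat_to_air = 1.02e3         # W removed by the ventilation air

heat_removal_fraction = heat_to_air / decay_heat
wall_heat_fraction = 1.0 - heat_removal_fraction   # conducted into the rock mass

print(f"heat removal fraction: {heat_removal_fraction:.2f}")
print(f"wall heat fraction:    {wall_heat_fraction:.2f}")
```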

  15. Tracer disposition kinetics in the determination of local cerebral blood flow by a venous equilibrium model, tube model, and distributed model

    International Nuclear Information System (INIS)

    Sawada, Y.; Sugiyama, Y.; Iga, T.; Hanano, M.

    1987-01-01

    Tracer disposition kinetics in the determination of local cerebral blood flow (LCBF) were examined using three models: the venous equilibrium, tube, and distributed models. The technique most commonly used for measuring LCBF is the tissue uptake method, first developed and applied by Kety. LCBF measured with the ¹⁴C-iodoantipyrine (IAP) method is calculated using an equation derived by Kety from Fick's principle and a two-compartment model of blood-tissue exchange, together with the tissue concentration at a single data point. The procedure, in which the tissue is assumed to be in equilibrium with venous blood, will be referred to as the tissue equilibration model. In this article, the effects of the concentration gradient of tracer along the length of the capillary (tube model) and of the transverse heterogeneity in the capillary transit time (distributed model) on the determination of LCBF were theoretically analyzed for the tissue sampling method. Similarities and differences among these models are explored. The rank order of the LCBF calculated from arterial blood concentration time courses and the tissue concentration of tracer based on each model was tube model (model II) < distributed model (model III) < venous equilibrium model (model I). Data on ¹⁴C-IAP kinetics reported by Ohno et al. were employed. The LCBFs calculated based on model I were 45-260% larger than those in models II or III. To discriminate among the three models, we propose to examine the effect of altering the venous infusion time of tracer on the apparent tissue-to-blood concentration ratio (λapp). The ratio of the predicted λapp in models II or III to that in model I ranged from 0.6 to 1.3.
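    Under the venous-equilibrium (Kety) model, the tissue concentration at the sampling time T is Ci(T) = f ∫₀ᵀ Ca(t) exp(−(f/λ)(T − t)) dt, and LCBF f is solved from a single measured Ci(T). The sketch below does this numerically with an assumed arterial input function, partition coefficient and measured value; all numbers are illustrative and not taken from the cited study.

```python
import numpy as np

lam = 0.8                      # assumed tissue:blood partition coefficient (mL/g)
T = 60.0                       # sampling time (s)
t = np.linspace(0.0, T, 601)
Ca = 1.0 - np.exp(-t / 20.0)   # assumed arterial input function (arbitrary units)

def tissue_conc(f):
    """Predicted tissue concentration at time T for flow f (mL/g/s)."""
    kernel = np.exp(-(f / lam) * (T - t))
    return f * np.trapz(Ca * kernel, t)

Ci_measured = 0.35             # hypothetical tissue concentration at T

# Solve tissue_conc(f) = Ci_measured by a simple scan over candidate flows.
flows = np.linspace(1e-4, 0.05, 2000)
errors = np.abs([tissue_conc(f) - Ci_measured for f in flows])
lcbf = flows[np.argmin(errors)]
print(f"estimated LCBF ≈ {lcbf * 60:.3f} mL/g/min")
```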

  16. Functionalized anatomical models for EM-neuron Interaction modeling

    Science.gov (United States)

    Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang

    2016-06-01

    The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low-frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, are described. Electric and magnetic stimulation of the ulnar and sciatic nerves were modeled to help resolve a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low-frequency exposure standards for the entire population under all exposure conditions.
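    One common SD extrapolation model is Weiss's law, under which the threshold current is I_th(d) = I_rheobase · (1 + τ_SD / d) for pulse duration d, so the threshold charge grows linearly with duration. The sketch below fits a rheobase and SD time constant to synthetic threshold data under that assumption; the values are invented for illustration, not results from the study.

```python
import numpy as np

durations = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0]) * 1e-3   # pulse widths (s)
i_rheo_true, tau_true = 2.0, 0.25e-3                            # hypothetical truth
thresholds = i_rheo_true * (1.0 + tau_true / durations)         # synthetic "data"

# Weiss's law is linear in charge: Q_th = I_th * d = I_rh * d + I_rh * tau_SD,
# so a straight-line fit of charge against duration yields both parameters.
charge = thresholds * durations
slope, intercept = np.polyfit(durations, charge, 1)
i_rheobase = slope
tau_sd = intercept / slope

print(f"rheobase ≈ {i_rheobase:.2f} (a.u.), tau_SD ≈ {tau_sd * 1e3:.3f} ms")
```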

  17. Hidden Markov models: the best models for forager movements?

    Science.gov (United States)

    Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan

    2013-01-01

    One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through Hidden Markov models (HMMs). We propose here to evaluate two sets of alternative, state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which cast the inference of behavioural modes as a classification problem and may take better advantage of multivariate and non-linear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for this purpose. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.
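    For readers unfamiliar with the HMM baseline, the sketch below fits a 3-state Gaussian HMM to synthetic speed and turning-angle descriptors and Viterbi-decodes the mode sequence. It assumes the hmmlearn package is available and does not reproduce the study's HSMM or discriminative models.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed dependency, not the study's own code

rng = np.random.default_rng(0)
speed = np.concatenate([rng.normal(2, 0.5, 40),    # slow, tortuous ("fishing"-like)
                        rng.normal(8, 1.0, 40),    # fast, straight ("cruising"-like)
                        rng.normal(4, 0.8, 40)])   # intermediate ("searching"-like)
turn = np.concatenate([rng.normal(0, 1.0, 40),
                       rng.normal(0, 0.2, 40),
                       rng.normal(0, 0.6, 40)])
X = np.column_stack([speed, turn])                 # one row per VMS record

hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
hmm.fit(X)
modes = hmm.predict(X)          # Viterbi decoding of behavioural modes
print(modes[:10])
```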

  18. Hidden Markov models: the best models for forager movements?

    Directory of Open Access Journals (Sweden)

    Rocio Joo

    Full Text Available One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through Hidden Markov models (HMMs). We propose here to evaluate two sets of alternative, state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which cast the inference of behavioural modes as a classification problem and may take better advantage of multivariate and non-linear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for this purpose. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  19. How can model comparison help improving species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing, and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses that have to be assessed if they are to be used reliably by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie along the continuum between correlative and process-based models, for the current distribution of three major European tree species: Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distribution of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although the parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are determined, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could, however, be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, which argues for continued efforts to understand and explicitly formulate the impact of climatic conditions and variations on these processes.
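    A multi-scale map comparison in the spirit of the comparison map profile can be sketched as follows: two presence/absence maps are smoothed over windows of increasing size and their mean absolute difference is tracked across scales. The maps, window sizes and the use of a simple mean absolute difference below are placeholders for illustration, not the method's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter   # assumed dependency

rng = np.random.default_rng(1)
map_a = (rng.random((100, 100)) > 0.5).astype(float)   # e.g. correlative SDM output
map_b = (rng.random((100, 100)) > 0.5).astype(float)   # e.g. process-based SDM output

for window in (1, 3, 9, 27):
    smoothed_a = uniform_filter(map_a, size=window)
    smoothed_b = uniform_filter(map_b, size=window)
    d = np.mean(np.abs(smoothed_a - smoothed_b))
    print(f"window {window:>2}: mean absolute difference = {d:.3f}")
```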

  20. Modeling Renewable Penetration Using a Network Economic Model

    Science.gov (United States)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach for designing energy systems with renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging price and quantity information between the nodes of the system. This formulation is very flexible and allows models to be built and modified quickly. This paper reports an experiment designing a system with photovoltaic generators and base-load and peaking fossil generators. The level of PV penetration as a function of its price and the capacities of the fossil generators were determined using both the network approach and an exact, analytic approach. The two methods are found to agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
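    The price/quantity exchange between nodes can be illustrated with a toy market-clearing loop: a market node nudges its price in proportion to excess demand until the supply offered by a base and a peak generator matches a price-responsive demand. The cost curves, demand curve and adjustment rule below are invented for illustration and are not the model used in the paper.

```python
def supply(price, marginal_cost, capacity):
    """A generator node offers its full capacity when the price covers its marginal cost."""
    return capacity if price >= marginal_cost else 0.0

def demand(price, base=120.0, elasticity=2.0):
    """A demand node with a simple linear demand curve."""
    return max(base - elasticity * price, 0.0)

price, step = 10.0, 0.05
for _ in range(10_000):
    q_supply = supply(price, 20.0, 60.0) + supply(price, 35.0, 80.0)  # base + peak units
    q_demand = demand(price)
    excess = q_demand - q_supply
    if abs(excess) < 1e-3:
        break
    price += step * excess          # raise price when demand exceeds supply

print(f"clearing price ≈ {price:.2f}, quantity ≈ {q_demand:.1f}")
```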