Study of fission cross sections induced by nucleons and pions using the cascade-exciton model CEM95
Nucleon- and pion-induced fission cross sections at intermediate and higher energies are important in current nuclear applications such as accelerator-driven systems (ADS), in medicine, and for radiation effects on electronics. In the present work, microscopic fission cross sections induced by nucleons and pions are calculated with the cascade-exciton model code CEM95 for different projectile-target combinations at various energies, and the computed cross sections are compared with the experimental data found in the literature. A new approach is used to compute the fission cross sections, in which the ratio of the level density parameter in the fission channel to that in the neutron emission channel is allowed to change with the incident energy of the projectile. Without this new approach we are unable to describe the fission cross sections well. Proton-induced fission cross sections are calculated for 197Au, 208Pb, 209Bi, 238U and 239Pu targets in the energy range from 20 MeV to 2000 MeV. Neutron-induced fission cross sections are computed for 238U and 239Pu in the energy range from 20 MeV to 200 MeV. Negative-pion-induced fission cross sections are calculated for 197Au and 208Pb targets in the energy range from 50 MeV to 2500 MeV. The calculated cross sections are essential for building a data library file for accelerator-driven systems, just as was done for conventional nuclear reactors. The computed values exhibit reasonable agreement with the experimental values found in the literature across a wide range of beam energies.
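The central idea of the abstract, letting the ratio of level density parameters a_f/a_n vary with energy, can be illustrated with a schematic Fermi-gas estimate of the fission-to-neutron decay width ratio. This is an illustrative sketch only, not the CEM95 implementation; the function name, the fixed a_n, and the barrier/separation energies used below are all hypothetical placeholders.

```python
import math

def gamma_f_over_gamma_n(E_star, B_f, B_n, af_over_an, a_n=20.0):
    """Schematic Fermi-gas estimate of the fission-to-neutron width ratio:

        Gamma_f / Gamma_n ~ exp[ 2*sqrt(a_f*(E* - B_f)) - 2*sqrt(a_n*(E* - B_n)) ]

    All energies in MeV; a_n is a placeholder level-density parameter.
    The result is extremely sensitive to the ratio a_f/a_n.
    """
    a_f = af_over_an * a_n
    return math.exp(2.0 * math.sqrt(a_f * (E_star - B_f))
                    - 2.0 * math.sqrt(a_n * (E_star - B_n)))

# A few-percent change in a_f/a_n shifts the fission probability strongly:
low  = gamma_f_over_gamma_n(E_star=50.0, B_f=5.0, B_n=7.0, af_over_an=1.00)
high = gamma_f_over_gamma_n(E_star=50.0, B_f=5.0, B_n=7.0, af_over_an=1.05)
```

Because of this exponential sensitivity, making af_over_an a function of incident energy, as the abstract describes, changes the computed fission cross section substantially.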
Complex Particle and Light Fragment Emission in the Cascade-Exciton Model of Nuclear Reactions
Mashnik, Stepan G.; Sierk, Arnold J.; Gudima, Konstantin K.
2002-01-01
A brief description of our improvements and refinements that led from the CEM95 version of the Cascade-Exciton Model (CEM) code to CEM97 and to CEM2k is given. The increased accuracy and predictive power of the code CEM2k are shown by several examples. To describe fission and light-fragment (heavier than 4He) production, the CEM2k code has been merged with the GEM2 code of Furihata. We present some results on proton-induced fragmentation and fission reactions predicted by this extended versio...
A Abdel-Hafiez; Shaker El-Shater; M F Zaki
2015-04-01
In this study, we have used the cascade-exciton model (CEM) to investigate different characteristics of nuclear reactions. The number of nucleon–nucleon collisions in Pb+Pb collisions as a function of impact parameter, and the rapidity distributions of negative particles from p+Ar and p+Xe interactions at p_lab = 200 GeV/c, have been studied. We obtained inclusive spectra of pions for separate charge states and total neutron multiplicities per primary reaction at 1000 MeV for different thin targets. Cross-sections for the reactions 209Bi(p,f) and 209Bi(n,f) were also studied, and interactions of 1.0 GeV protons with C, Al, Cu, Sn, and Pb are presented. All the calculated characteristics are compared with other theoretical calculations and with experimental data; the CEM shows good agreement with both. The quantum molecular dynamics (QMD) model is used as a theoretical benchmark for these comparisons.
The Cascade-Exciton Approach to Nuclear Reactions. (Foundation and Achievements)
The relativistic kinetic equations describing nuclear reactions at intermediate energies are obtained on a dynamical basis. These equations are analyzed and realized in several versions of the Cascade-Exciton Model (CEM). The CEM assumes that reactions occur in three stages: the intranuclear cascade, the pre-equilibrium stage, and evaporation. A large variety of experimental data on hadron- and photonuclear reactions in the bombarding energy range up to several GeV is analyzed in this approach. The contributions of different pion and photon absorption mechanisms and the relative roles of different particle and photon production mechanisms in these reactions are estimated. The CEM adequately describes nuclear reactions at intermediate energies and has one of the best predictive powers among available modern models. 55 refs., 10 figs., 1 tab
Comparison of Expanded Preequilibrium CEM Model with CEM03.03 and Experimental Data, FY2013
Kerby, Leslie M; Sierk, Arnold J
2014-01-01
Emission of light fragments (LF) from nuclear reactions is an open question. Different reaction mechanisms contribute to their production; the relative roles of each, and how they change with incident energy, mass number of the target, and the type and emission energy of the fragments, are not completely understood. None of the available models is able to accurately predict emission of LF from arbitrary reactions. However, the ability to describe production of LF (especially at energies $\gtrsim 30$ MeV) from many reactions is important for different applications, such as cosmic-ray-induced Single Event Upsets (SEUs), radiation protection, and cancer therapy with proton and heavy-ion beams, to name just a few. The Cascade-Exciton Model (CEM) version 03.03 and the Los Alamos version of the Quark-Gluon String Model (LAQGSM) version 03.03 event generators in the Monte Carlo N-Particle Transport Code version 6 (MCNP6) describe quite well the spectra of fragments with sizes up to $^{4}$He across a broad range of target...
Taranenko, Valery; Xu, X George
2009-01-01
Protection of pregnant women and their foetus against external proton irradiation poses a unique challenge. Assessment of foetal dose due to external protons in galactic cosmic rays and as secondaries generated in aircraft walls is especially important during high-altitude flights. This paper reports a set of fluence-to-absorbed-dose conversion coefficients for the foetus and its brain for external monoenergetic proton beams in six standard configurations (antero-posterior, postero-anterior, right lateral, left lateral, rotational and isotropic). The pregnant-female anatomical definitions at each of the three gestational periods (3, 6 and 9 months) are based on the newly developed RPI-P series of models, whose organ masses were matched within 1% of the International Commission on Radiological Protection reference values. Proton interactions and the transport of secondary particles were carefully simulated using the Monte Carlo N-Particle eXtended code (MCNPX) and phantoms consisting of several million voxels at 3 mm resolution. When choosing the physics models in MCNPX, it was found that the advanced Cascade-Exciton intranuclear cascade model showed a maximum foetal dose increase of 9% compared with the default model combination at intermediate energies below 5 GeV. Foetal dose results from this study are tabulated and compared with previously published data based on simplified anatomy. The comparison showed a strong dependence upon source geometry, energy and gestation period: the dose differences are typically less than 20% for all sources except ISO, where systematically 40-80% higher doses were observed. Below 200 MeV, a larger discrepancy in dose was found due to the Bragg-peak shift caused by the different anatomy. The tabulated foetal doses represent the latest and most detailed study to date, offering a useful set of data to improve radiation protection dosimetry against external protons. PMID:19246483
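At use time, the conversion-coefficient approach described above reduces to multiplying a particle fluence by a tabulated coefficient for the given beam geometry, energy, and gestation period. A minimal sketch of that step; the function name, the coefficient value, and the units chosen here are hypothetical, and real values must come from the paper's tables:

```python
def foetal_dose_pGy(fluence_per_cm2, coeff_pGy_cm2):
    """Absorbed dose D = Phi * c, with fluence Phi in cm^-2
    and conversion coefficient c in pGy*cm^2 (units assumed)."""
    return fluence_per_cm2 * coeff_pGy_cm2

# e.g. a hypothetical coefficient of 250 pGy*cm^2 at some proton energy:
dose = foetal_dose_pGy(fluence_per_cm2=1.0e4, coeff_pGy_cm2=250.0)  # -> 2.5e6 pGy
```

Interpolating such coefficients between tabulated energies, and between the six source geometries, is how the tabulated data would typically be applied in practice.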
Highlights:
• We investigated the main quantities determining ADS performance.
• We calculated ADS performance measures such as neutron yield, neutron leakage spectra, heating, and neutron and proton spectra in the target and in the beam window.
• We used the MCNPX-2.7.0 Monte Carlo code for three-dimensional calculations.
Abstract: The MCNPX code offers options based on physics packages: the Bertini, ISABEL and INCL4 intra-nuclear cascade models; the Dresner and ABLA evaporation-fission models; and the CEM2k cascade-exciton model. This study analyzes the main quantities determining ADS performance, such as neutron yield, neutron leakage spectra, heating, and neutron and proton spectra in the target and in the beam window, calculated by the MCNPX-2.5.0 Monte Carlo transport code, which is a combination of the LAHET and MCNP codes. The results obtained by simulating the different models cited above and implemented in MCNPX are compared with each other. The investigated system is composed of a natural-lead cylindrical target and a stainless steel (HT9) beam window. The target has been optimized to produce the maximum number of neutrons, with a radius of 20 cm and a height of 70 cm. The target is bombarded by a 1 GeV, 1 mA proton beam from a high-intensity linear accelerator. The protons are assumed uniformly distributed across the beam of radius 3 cm, entering the target through a hole of 5.3 cm radius. The proton beam has an outer radius of 5.3 cm and an inner radius of 5.0 cm. The maximum value of the neutron flux in the target is observed on the axis ∼10 cm below the beam window, where the maximum difference between the 7 different models is ∼15%. The total neutron leakage of the target calculated with Bertini/ABLA is 1.83 × 10^17 n/s, about 14% higher than the value calculated with INCL4/Dresner (1.60 × 10^17 n/s). Bertini/ABLA gives top, bottom and side neutron leakage fractions of 20%, 2.3% and 77.6% of the total leakage, respectively, whereas the calculated fractions are 18
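The quoted ~14% spread between the Bertini/ABLA and INCL4/Dresner leakage values follows directly from the two numbers in the abstract, and the quoted leakage fractions should sum to roughly 100%. A quick arithmetic check, using only the values stated above:

```python
# Total neutron leakage (n/s) from the two physics-model combinations:
bertini_abla  = 1.83e17
incl4_dresner = 1.60e17

# Relative difference, expected to be about 14% as the abstract states:
rel_diff = (bertini_abla - incl4_dresner) / incl4_dresner  # 0.23e17 / 1.60e17

# Bertini/ABLA top, bottom, and side leakage fractions (percent):
fractions = [20.0, 2.3, 77.6]
total = sum(fractions)  # should be close to 100%
```

The check confirms internal consistency of the quoted numbers: the relative difference is 14.4%, and the fractions sum to 99.9%.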
In this study, using equilibrium and pre-equilibrium reaction mechanisms, neutron-emission spectra produced by (p,xn) reactions on the deformed target nuclei 165Ho, 181Ta and 232Th have been calculated at bombarding energies up to 22.4 MeV. The pre-equilibrium contributions were calculated using the newly evaluated hybrid model, the geometry-dependent hybrid model, the full exciton model and the cascade exciton model. The equilibrium component was calculated with the Weisskopf-Ewing model. The obtained results have been discussed and compared with the available experimental data, with which they are in agreement.
In the present study, using equilibrium and pre-equilibrium reaction mechanisms, the (p,n) cross-section values for target nuclei with A = 25–240 were investigated over the 10–40 MeV incident energy range. The excitation functions for the pre-equilibrium contribution were newly calculated using the hybrid model, the geometry-dependent hybrid model, the full exciton model, and the cascade exciton model. The equilibrium component was calculated with the traditional compound nucleus model of Weisskopf and Ewing. The calculated results were compared with the excitation function measurements available in the literature. (author)
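The Weisskopf–Ewing equilibrium component used in calculations like the two above yields an evaporation spectrum of the familiar shape dN/dε ∝ ε σ_inv(ε) exp(−ε/T). A minimal sketch with a constant inverse cross section (a simplifying assumption, not what these codes actually use for σ_inv):

```python
import math

def weisskopf_ewing_shape(eps_MeV, T_MeV, sigma_inv=1.0):
    """Unnormalized Weisskopf-Ewing evaporation spectrum:

        dN/d(eps) ~ eps * sigma_inv(eps) * exp(-eps / T)

    with the inverse cross section sigma_inv taken constant for simplicity.
    T_MeV plays the role of the nuclear temperature."""
    return eps_MeV * sigma_inv * math.exp(-eps_MeV / T_MeV)

# For constant sigma_inv the spectrum peaks at eps = T:
T = 2.0
peak = weisskopf_ewing_shape(T, T)
```

The soft peak near ε = T is what distinguishes the equilibrium (evaporation) component from the harder pre-equilibrium tail supplied by the hybrid and exciton models mentioned above.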
Equilibrium and pre-equilibrium emissions in proton-induced reactions on 203,205Tl
A Kaplan; A Aydin; E Tel; B Şarer
2009-02-01
In this study, the excitation functions for the reactions 203Tl(p,n)203Pb, 205Tl(p,3n)203Pb, 203Tl(p,2n)202Pb, 205Tl(p,4n)202Pb, 203Tl(p,3n)201Pb, 205Tl(p,5n)201Pb, 203Tl(p,4n)200Pb and 205Tl(p,6n)200Pb have been calculated using pre-equilibrium and equilibrium reaction mechanisms. Calculated results based on the hybrid model, the geometry-dependent hybrid model and the cascade-exciton model have been compared with the experimental data.
Juel-Christiansen, Carsten
2005-01-01
The article highlights the visual rotation - images, drawings, models, works - as the privileged medium in the communication of ideas among creative architects.
Spädtke, P
2013-01-01
Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary, as well as magnetic fields caused by coils or permanent magnets, have to be known. Internal sources of both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the computing times necessary for different problems. Different types of charged-particle sources are presented together with suitable physical models: electron guns as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron cyclotron resonance ion sources, and H$^-$ sources), together with some remarks on beam transport.
This last volume in the series of textbooks on environmental isotopes in the hydrological cycle provides an overview of the basic principles of existing conceptual formulations of modelling approaches. While some of the concepts provided in Chapter 2 and Chapter 3 are of general validity for quantitative interpretation of isotope data, the modelling methodologies commonly employed for incorporating isotope data into evaluations specifically related to groundwater systems are given in this volume together with some illustrative examples. Development of conceptual models for quantitative interpretation of isotope data in hydrogeology, and the assessment of their limitations and field verification, has been given priority in the research and development efforts of the IAEA during the last decade. Several Co-ordinated Research Projects on this specific topic were implemented and their results published by the IAEA. Based on these efforts and on contributions made by a number of scientists involved in this field, the IAEA has published two Technical Documents: "Mathematical models and their applications to isotope studies in groundwater studies" (IAEA TECDOC-777, 1994) and "Manual on mathematical models in isotope hydrogeology" (IAEA TECDOC-910, 1996). Results of a recently completed Co-ordinated Research Project entitled "Use of isotopes for analysis of flow and transport dynamics in groundwater systems" will also soon be published by the IAEA. This is why the IAEA was involved in the co-ordination required for the preparation of this volume; the material presented is a condensed overview prepared by some of the scientists who were involved in the above-cited IAEA activities. Volume VI, providing such an overview, was included in the series to make it self-sufficient in its coverage of the field of isotope hydrology. A special chapter on the methodologies and concepts related to geochemical modelling in groundwater
New calculations of excitation functions for some light deformed medical radionuclides
In this study, new calculations of the excitation functions of the 51V(p,n)51Cr, 57Fe(p,n)57Co, 58Fe(p,n)58Co, 75As(p,n)75Se, 81Br(p,n)81Kr, 82Se(p,n)82Br and 85Rb(p,n)85Sr reactions have been carried out for incident proton energies up to 50 MeV. In these calculations the pre-equilibrium and equilibrium effects have been investigated. The pre-equilibrium calculations involve the hybrid model, the geometry-dependent hybrid model, the cascade exciton model and the full exciton model. Equilibrium effects have been calculated according to the Weisskopf-Ewing model. The calculated results have been compared with experimental data taken from the literature.
(n,p), (n,2n), (n,d), and (n,α) cross-section calculations on 16O with 0-40 MeV neutrons
Ozdemir Omer Faruk
2015-01-01
Oxygen is one of the elements that interact with the neutrons emitted after fission reactions. Oxygen exists abundantly both in nuclear fuel (UO2) and in moderators (H2O). Nuclear reactions of oxygen with neutrons are important for the stability of the nuclear fuel and for the neutron economy. In this study, equilibrium and pre-equilibrium models have been used to calculate the (n,p), (n,d), (n,2n) and (n,α) nuclear reaction cross-sections of 16O for neutron incident energies up to 40 MeV. The Hybrid and Standard Weisskopf-Ewing models in the ALICE-2011 program, the Weisskopf-Ewing and Full Exciton models in the PCROSS program, and the Cascade Exciton Model in the CEM03.01 program have been utilized. The calculated results have been compared with experimental and theoretical cross-section data obtained from the EXFOR and ENDF/B-VII.1 libraries.
Muller, Pierre-Alain; Fondement, Frédéric; Baudry, Benoit
2009-01-01
Model-driven engineering and model-based approaches have permeated all branches of software engineering, to the point that it seems we are using models, as Molière's Monsieur Jourdain was using prose, without knowing it. At the heart of modeling, there is a relation that we establish to represent something by something else. In this paper we review various definitions of models and relations between them. Then, we define a canonical set of relations that can be used to express various ki...
Analysis of Intermediate Energy Photonuclear Reactions
The Cascade-Exciton Model (CEM) of nuclear reactions has been extended to describe photonuclear disintegration at intermediate energies. Using the CEM and the ORNL version of the Intranuclear Cascade Model for incident energies higher than the giant dipole resonance (GDR) region, and a group theory formalism based on the Interacting Boson Model in the GDR region, we have analyzed a variety of data for reactions induced by photons with energies up to ∼ 1.2 GeV and target-nuclei from 12C to 243Am. The contributions of different photon absorption mechanisms and the relative role of different particle production mechanisms in these reactions are discussed. 41 refs., 7 figs., 1 tab
Muller, Pierre-Alain; Fondement, Frédéric; Baudry, Benoit; Combemale, Benoit
2012-01-01
Model-driven engineering and model-based approaches have permeated all branches of software engineering to the point that it seems that we are using models, as Molière's Monsieur Jourdain was using prose, without knowing it. At the heart of modeling, there is a relation that we establish to represent something by something else. In this paper we review various definitions of models and relations between them. Then, we define a canonical set of relations that can be used to express various kin...
The first systematic measurements of neutron-induced inclusive production of protons, deuterons, tritons and charged pions on carbon, copper, and bismuth at bombarding energies of 300-580 MeV and in the angular interval from 51 deg to 165 deg have been analyzed in the framework of the cascade-exciton model. The roles of single-particle scattering, rescattering effects, pre-equilibrium emission and the 'coalescence' mechanism in particle production in the cumulative (i.e., kinematically forbidden for quasi-free intranuclear projectile-nucleon collisions) and noncumulative regions are discussed. A weak sensitivity of the inclusive distributions to the specific reaction mechanisms and the need for correlation and polarization measurements are noted. 27 refs.; 12 figs.; 1 tab
Blouin A.; Combemale B.; Baudry B.; Beaudoux O.
2011-01-01
Among model comprehension tools, model slicers are tools that extract a subset from a model for a specific purpose, letting modelers rapidly gather relevant knowledge from large models. However, existing slicers are dedicated to one modeling language. This is an issue when we observe that new domain-specific modeling languages (DSMLs), for which we want slicing abilities, are created almost on a daily basis. This paper proposes the Kompren l...
Onatski, Alexei; Williams, Noah
2003-01-01
Recently there has been much interest in studying monetary policy under model uncertainty. We develop methods to analyze different sources of uncertainty in one coherent structure useful for policy decisions. We show how to estimate the size of the uncertainty based on time series data, and incorporate this uncertainty in policy optimization. We propose two different approaches to modeling model uncertainty. The first is model error modeling, which imposes additional structure on the errors o...
Production of Energetic Light Fragments in CEM, LAQGSM, and MCNP6
Mashnik, Stepan G; Gudima, Konstantin K; Sierk, Arnold J; Bull, Jeffrey S; James, Michael R
2016-01-01
We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte-Carlo N-particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi break-up, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi break-up model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for a possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A < ...
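The coalescence stage mentioned above is conventionally modeled by clustering cascade nucleons that lie close together in momentum space. A schematic version of such a criterion follows; the coalescence radius p0 used here is a hypothetical placeholder, not the value used in CEM or LAQGSM:

```python
import math

def coalesce(p1, p2, p0_MeV_c=90.0):
    """True if two nucleons with momentum 3-vectors p1, p2 (MeV/c)
    lie within a sphere of radius p0 in relative momentum space,
    i.e. |p1 - p2| < p0 (a simple momentum-space coalescence criterion)."""
    dp = math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
    return dp < p0_MeV_c

# A close pair is a deuteron candidate; a distant pair is not:
close_pair   = coalesce((100.0, 0.0, 0.0), (150.0, 0.0, 0.0))   # True
distant_pair = coalesce((100.0, 0.0, 0.0), (400.0, 0.0, 0.0))   # False
```

Extending this clustering to previously formed light clusters, as described in the abstract, is what allows coalescence to feed fragments heavier than 4He.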
Baden-Fuller, C.; Morgan, M S
2010-01-01
Drawing on research undertaken in the history and philosophy of science, with particular reference to the extensive literature which discusses the use of models in biology and economics, we explore the question ‘Are Business Models useful?’ We point out that they act as various forms of model: to provide means to describe and classify businesses; to operate as sites for scientific investigation; and to act as recipes for creative managers. We argue that studying business models as models is r...
Production of energetic light fragments in spallation reactions
Different reaction mechanisms contribute to the production of light fragments (LF) from nuclear reactions. Available models cannot accurately predict emission of LF from arbitrary reactions. The cascade-exciton model (CEM) and the Los Alamos version of the quark-gluon string model (LAQGSM), as implemented in the CEM03.03 and LAQGSM03.03 event generators used in the Los Alamos Monte Carlo transport code MCNP6, describe quite well the spectra of fragments with sizes up to 4He across a broad range of target masses and incident energies. However, they do not predict high-energy tails for LF heavier than 4He. The standard versions of CEM and LAQGSM do not account for preequilibrium emission of LF larger than 4He. The aim of our work is to extend the preequilibrium model to include such processes. We do this by including the emission of fragments heavier than 4He at the preequilibrium stage, and using an improved version of the Fermi Break-up model, providing improved agreement with various experimental data
MARS code developments, benchmarking and applications
Recent developments of the MARS Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electron volt up to 100 TeV are described. The physical model of hadron and lepton interactions with nuclei and atoms has undergone substantial improvements. These include a new nuclear cross section library, a model for soft pion production, a cascade-exciton model, a dual parton model, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon and charged hadron electromagnetic interactions with matter. New algorithms have been implemented into the code and benchmarked against experimental data. A new Graphical-User Interface has been developed. The code capabilities to simulate cascades and generate a variety of results in complex systems have been enhanced. The MARS system includes links to the MCNP code for neutron and photon transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings. Results of recent benchmarking of the MARS code are presented. Examples of non-trivial code applications are given for the Fermilab Booster and Main Injector, for a 1.5 MW target station and a muon storage ring
Production of Energetic Light Fragments in Spallation Reactions
Mashnik Stepan G.
2014-03-01
Different reaction mechanisms contribute to the production of light fragments (LF) from nuclear reactions. Available models cannot accurately predict emission of LF from arbitrary reactions. However, the emission of LF is important for many applications, such as cosmic-ray-induced single event upsets, radiation protection, and cancer therapy with proton and heavy-ion beams, to name just a few. The cascade-exciton model (CEM) and the Los Alamos version of the quark-gluon string model (LAQGSM), as implemented in the CEM03.03 and LAQGSM03.03 event generators used in the Los Alamos Monte Carlo transport code MCNP6, describe quite well the spectra of fragments with sizes up to 4He across a broad range of target masses and incident energies. However, they do not predict high-energy tails for LF heavier than 4He. The standard versions of CEM and LAQGSM do not account for preequilibrium emission of LF larger than 4He. The aim of our work is to extend the preequilibrium model to include such processes. We do this by including the emission of fragments heavier than 4He at the preequilibrium stage, and using an improved version of the Fermi break-up model, providing improved agreement with various experimental data.
Model Validation and Model Error Modeling
Ljung, Lennart
1999-01-01
To validate an estimated model and to have a good understanding of its reliability is a central aspect of System Identification. This contribution discusses these aspects in the light of model error models that are explicit descriptions of the model error. A model error model is implicitly present in most model validation methods, so the concept is more of a representation form than a set of new techniques. Traditional model validation is essentially a test of whether the confidence region of...
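A model error model in the sense above is an explicit model fitted to the residuals of the nominal model. The toy sketch below summarizes the residual sequence by its bias and spread; the names and this particular summary are illustrative simplifications, not Ljung's formulation:

```python
from statistics import mean, stdev

def model_error_model(y_measured, y_nominal):
    """Fit the simplest possible 'model error model': the residual
    sequence between data and the nominal model's predictions,
    summarized by its mean (bias) and sample standard deviation."""
    residuals = [ym - yn for ym, yn in zip(y_measured, y_nominal)]
    return mean(residuals), stdev(residuals)

# If the bias is small relative to the spread, the nominal model
# passes this (crude) validation check:
bias, spread = model_error_model([1.0, 2.1, 2.9, 4.2], [1.0, 2.0, 3.0, 4.0])
```

In practice one would fit a dynamic model to the residuals rather than just moments, and use its confidence region to bound the error of the nominal model, which is the idea the abstract describes.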
Calculation of photo-nuclear reaction cross sections for 16O
Arasoglu Ali
2015-01-01
Because of the high thermal expansion coefficient of uranium, the fuel used in nuclear power plants is usually in the form of UO2, which has a ceramic structure and a small thermal expansion coefficient. UO2 contains one uranium atom and two oxygen atoms. After the fission process, the total energy of the emitted gamma rays is about 14 MeV. This gamma energy may cause transmutation of 16O isotopes, which changes the physical properties of the nuclear fuel. It is therefore important to calculate the photo-nuclear reaction cross sections of 16O. In this study, the photo-nuclear cross-sections of the (γ,p), (γ,np), (γ,n) and (γ,2n) reactions on 16O were calculated using different models for pre-equilibrium and equilibrium effects. Taking incident gamma energies up to 40 MeV, the Hybrid and Cascade Exciton models were used for the pre-equilibrium calculations and the Weisskopf-Ewing (equilibrium) model was used for the equilibrium calculations. The calculated results were compared with experimental and theoretical data: experimental results were obtained from EXFOR, while the TENDL-2013, JENDL/PD-2004 and ENDF/B-VII.1 databases were used for the theoretical results.
Several nuclear model codes were applied to calculations of nuclear data in the energy region from 10 MeV to 1 GeV. At energies up to 100 MeV the nuclear theory code GNASH was used to calculate nuclear data for neutrons incident on 238U, 233-236U, 238-242Pu, 237Np, 232Th, 241-243Am and 242-247Cm. At energies from 100 MeV to 1 GeV the intranuclear cascade exciton model, including the fission process, was applied to calculations for protons and neutrons on 233U, 235U, 238U, 232Th, 232Pa, 237Np, 238Np, 239Pu, 241Am, 242Am and 242-248Cm. Determination of parameter systematics was a major effort in the present work, aimed at improving the predictive capability of the models used. An emphasis was placed upon a simultaneous analysis of data for a variety of reaction channels for the nuclei considered, as well as of data available for nearby nuclei or for other incident particles. Comparisons with available experimental data on multiple reaction cross sections, isotope yields, fission cross sections, particle multiplicities, secondary particle spectra, and double differential cross sections indicate that the calculations reproduce the trends, and often the details, of the measured data. (author) 82 refs
The code CEM92M is intended for calculation of reaction, elastic, fission and total cross sections; nuclear fissilities; excitation functions; nuclide distributions; energy and angular spectra; double differential cross sections; and mean multiplicities, mean energies and production cross sections for n, p, d, t, 3He, 4He, π−, π0 and π+ emitted in nucleon- and pion-induced reactions, using the Cascade-Exciton Model (CEM) of nuclear reactions. The main assumptions of the CEM are given. A large variety of nucleon-, pion- and γ-induced reactions in the bombarding energy range of 0-3000 MeV have been analyzed and evaluated with it. It is shown that the CEM adequately describes nuclear reactions at intermediate energies, has one of the best predictive powers as compared with other available modern models, and may be used for evaluation of medium-energy nuclear data for science and applications. Further development and improvement of the code CEM92M are discussed. 21 refs., 7 figs.
Validation and verification of MCNP6 as a new simulation tool useful for medical applications
Mashnik, Stepan G [Los Alamos National Laboratory
2011-01-06
MCNP6, the latest and most advanced LANL transport code, representing a merger of MCNP5 and MCNPX, has been validated and verified (V&V) against different experimental data and against results from other codes relevant to medical applications. In the present work, we V&V MCNP6 using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.02 and LAQGSM03.03. We found that MCNP6 describes well data of interest for medical applications measured on both thin and thick targets, and agrees very well with similar results obtained with other codes; MCNP6 may be a very useful tool for medical applications. We plan to make MCNP6 available to the public via RSICC at Oak Ridge in the middle of 2011, but we are already allowed to provide it to friendly US beta-users outside LANL.
232Th and 238U neutron emission cross section calculations and analysis of experimental data
In this study, pre-equilibrium neutron-emission spectra produced by (n,xn) reactions on the nuclei 232Th and 238U have been calculated. Angle-integrated cross sections in neutron-induced reactions on targets 232Th and 238U have been calculated at bombarding energies up to 18 MeV. We have investigated the multiple pre-equilibrium matrix element constant for internal transitions in 232Th (n,xn) neutron emission spectra. In the calculations, the geometry-dependent hybrid model and the cascade-exciton model, including pre-equilibrium effects, have been used. In addition, we have described how multiple pre-equilibrium emissions can be included in the Feshbach-Kerman-Koonin (FKK) fully quantum-mechanical theory. By analyzing the (n,xn) reaction on 232Th and 238U at incident energies from 2 MeV to 18 MeV, the importance of multiple pre-equilibrium emission can be seen clearly. All calculated results have been compared with the available experimental data, and good agreement was found.
MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies
Mashnik, Stepan Georgievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kerby, Leslie Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-05-22
MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is actually much more than the sum of those two computer codes; MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and results by several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei of energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, like cosmic-ray-induced single event upsets (SEU’s), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predicting capabilities of MCNP6 for such reactions are possible, and are discussed in this work.
Poulsen, Helle
1996-01-01
This paper presents a functional modelling method called Actant Modelling rooted in linguistics and semiotics. Actant modelling can be integrated with Multilevel Flow Modelling (MFM) in order to give an interpretation of actants.
Anaïs Schaeffer
2012-01-01
By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m on either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known …
Cameron, Ian; Gani, Rafiqul
2011-01-01
This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation; and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed, and strategies are given to improve model development towards “fit-for-purpose” models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very...
Daniel J Kliebenstein
2012-01-01
Models of myriad forms are rapidly becoming central to biology. These range from statistical models that are fundamental to the interpretation of experimental results, to ODE models that attempt to describe the results in a mechanistic format. Models will become more and more essential to biologists, but this growing importance requires all model users to become more sophisticated about what is in a model and how that limits the usability of the model. This review attempts to relay the potential pi...
Li, Qin; Zhao, Yongxin; Wu, Xiaofeng; Liu, Si
There can be multitudinous models specifying aspects of the same system. Each model has a bias towards one aspect. These models often overlap in specific aspects though they have different expressions. A specification written in one model can be refined by introducing additional information from other models. The paper proposes a concept of promoting models, a methodology for obtaining refinements with support from cooperating models. It refines a primary model by integrating the information from a secondary model. The promotion principle is not merely an academic point, but also a reliable and robust engineering technique which can be used to develop software and hardware systems. It can also check the consistency between two specifications from different models. A case of modeling a simple online shopping system with the cooperation of the guarded design model and CSP model illustrates the practicability of the promotion principle.
Anyone who worries that physicists are running out of interesting challenges to tackle and important problems to solve should read the two, very different feature articles in this issue. In 'Climate change: complexity in action', Klaus Hasselmann and colleagues write about the challenges of including economic and political dimensions in computer simulations of climate change. It is hard to imagine a physics-based topic that has a greater impact on the world at large. In 'Quarks, diquarks and pentaquarks', Robert Jaffe and Frank Wilczek describe our current understanding of quantum chromodynamics and the strong nuclear force. In this case it is hard to think of many more difficult problems in fundamental physics. Traditional climate modelling is difficult enough because a whole range of effects in the atmosphere and the oceans have to be taken into account. It typically takes weeks for a state-of-the-art supercomputer to simulate 100 years of climate change with a horizontal resolution of 100 km. But climate change is about much more than solving difficult differential equations - there are crucial social, political and economic influences as well. Some researchers, including a significant number of physicists, have started to look at this integrated-assessment approach. The first challenge is to develop climate models that take minutes to run on a laptop. The next challenge is to develop analogous models that work in the social, political and economic arenas - which is not a trivial task - and then integrate all these different models and explore all the possible global-warming scenarios. Physicists also hope to integrate quantum chromodynamics (QCD) into the larger framework of a so-called theory of everything. Like climate modellers, particle theorists working on QCD require enormous computational resources for their calculations, and even then there are limits to what can be achieved (e.g. the mass of the proton has yet to be calculated from first principles).
Anonymous
2003-01-01
This paper puts forward a new concept: the model warehouse. It analyzes why the model warehouse has appeared and introduces its characteristics and architecture. Finally, the paper points out that the model warehouse is an important part of WebGIS.
Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina;
2011-01-01
This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature-dependent behaviour of physical properties are introduced, as well as equation of state (EOS) models such as the SRK EOS. Modelling of liquid phase activity coefficients is also covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses...
Katerina Simons
1997-01-01
Modern finance would not have been possible without models. Increasingly complex quantitative models drive financial innovation and the growth of derivatives markets. Models are necessary to value financial instruments and to measure the risks of individual positions and portfolios. Yet when used inappropriately, the models themselves can become an important source of risk. Recently, several well-publicized instances occurred of institutions suffering significant losses attributed to model er...
M Batty
2007-01-01
The term 'model' is now central to our thinking about how we understand and design cities. We suggest a variety of ways in which we use 'models', linking these ideas to Abercrombie's exposition of Town and Country Planning which represented the state of the art fifty years ago. Here we focus on using models as physical representations of the city, tracing the development of symbolic models where the focus is on simulating how function generates form, to iconic models where the focus is on representi...
Yost, S.A.
1991-05-01
Random matrix models based on an integral over supermatrices are proposed as a natural extension of bosonic matrix models. The subtle nature of superspace integration allows these models to have very different properties from the analogous bosonic models. Two choices of integration slice are investigated. One leads to a perturbative structure which is reminiscent of, and perhaps identical to, the usual Hermitian matrix models. Another leads to an eigenvalue reduction which can be described by a two component plasma in one dimension. A stationary point of the model is described.
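Schematically, the bosonic matrix models that this supermatrix construction extends are defined by a partition function of the standard form below; the supersymmetric extension replaces the Hermitian matrix by a supermatrix and the trace by the supertrace (the precise supermatrix measure and potential depend on the construction, which the abstract does not specify):

```latex
Z = \int \mathrm{d}M \; e^{-N \,\mathrm{Tr}\, V(M)}
\qquad\longrightarrow\qquad
Z_{\mathrm{super}} = \int \mathrm{d}\mathcal{M} \; e^{-N \,\mathrm{Str}\, V(\mathcal{M})}
```

Here M is an N×N Hermitian matrix, V a polynomial potential, and Str the supertrace over the supermatrix 𝓜.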
Larsen, Lars Bjørn; Vesterager, Johan
This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods and identifies tools which can provide support for this modelling activity. The model adopted for global manufacturing is that of an extended enterprise. One or more units from beyond the network may complement the extended enterprise. The common reference model for this extended enterprise will utilise GERAM (Generalised Enterprise Reference Architecture and Methodology) to provide an architectural framework for the modelling carried out within the...
Contributions to the workshop 'Geochemical modeling' from 19 to 20 September 1990 at the Karlsruhe Nuclear Research Centre. The report contains the programme and a selection of the lectures held at the workshop 'Geochemical modeling'. (BBR)
This presentation provided information on entrainment models. Entrainment models use entrainment hypotheses to express the continuity equation. The advantage is that plume boundaries are known. A major disadvantage is that only rather simple problems can be solved. The ...
Fox, Mark S.; Gruninger, Michael
1998-01-01
To remain competitive, enterprises must become increasingly agile and integrated across their functions. Enterprise models play a critical role in this integration, enabling better designs for enterprises, analysis of their performance, and management of their operations. This article motivates the need for enterprise models and introduces the concepts of generic and deductive enterprise models. It reviews research to date on enterprise modeling and considers in detail the Toronto virtual ent...
Jongerden, M.R.; Haverkort, B.R.
2008-01-01
The use of mobile devices is often limited by the capacity of the employed batteries. The battery lifetime determines how long one can use a device. Battery modeling can help to predict, and possibly extend this lifetime. Many different battery models have been developed over the years. However, with these models one can only compute lifetimes for specific discharge profiles, and not for workloads in general. In this paper, we give an overview of the different battery models that are availabl...
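As a concrete illustration of the kind of analytical battery model such overviews cover, Peukert's law relates battery lifetime to a constant discharge current. The coefficients a and b below are invented for illustration; real values are fitted per battery type:

```python
# Peukert's law: lifetime L = a / I^b for a constant discharge current I.
# With b > 1 (typical), higher currents drain the battery disproportionately fast.

def peukert_lifetime(current, a=100.0, b=1.3):
    """Battery lifetime (arbitrary time units) at constant current draw.
    a and b are hypothetical, battery-specific fitted coefficients."""
    return a / current ** b

t1 = peukert_lifetime(1.0)   # baseline lifetime at unit current
t2 = peukert_lifetime(2.0)   # doubling the current more than halves the lifetime
```

This is the simplest lifetime model; as the abstract notes, such models only cover specific (here, constant-current) discharge profiles, not general workloads.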
Turner, Raymond
2009-01-01
Computational models can be found everywhere in present-day science and engineering. Raymond Turner provides a logical framework and foundation for the specification and design of specification languages, and uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications, from programming language semantics and specification languages, through to knowledge representation languages and formalisms for natural language semantics. They are al
Frampton, Paul H.
1997-01-01
In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly...
In this work the most recent magnetospheric models are reviewed. After a short overview of the particle environment, a synthetic survey of the problem is given. For each feature of magnetospheric modelling (boundary, current sheet, ring-current) the approaches used by different authors are described. In the second part a description is given of the magnetospheric models, divided into four groups. In the last part, the different uses of magnetospheric models are illustrated by means of examples
Phoenix (formerly referred to as the Second Generation Model or SGM) is a global general equilibrium model designed to analyze energy-economy-climate related questions and policy implications in the medium- to long-term. This model disaggregates the global economy into 26 industr...
Sclütter, Flemming; Frigaard, Peter; Liu, Zhou
This report presents the model test results on wave run-up on the Zeebrugge breakwater under the simulated prototype storms. The model test was performed in January 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University. The detailed description of the model is given in...
Ravn, Anders P.; Staunstrup, Jørgen
1994-01-01
This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two. The model describes both functional and timing properties of an interface...
Hydrological models are mediating models
Babel, L. V.; Karssenberg, D.
2013-08-01
Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models play the role of instruments in scientific practice by mediating between theory and the world. It results from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction would deserve closer attention, for being rarely explicitly presented in peer-reviewed literature. We believe that devoting
Selén, Yngve
2004-01-01
Before using a parametric model, one has to be sure that it offers a reasonable description of the system to be modeled. If a bad model structure is employed, the obtained model will also be bad, no matter how good the parameter estimation method is. There exist many possible ways of validating candidate models. This thesis focuses on one of the most common ones, i.e., the use of information criteria. First, some common information criteria are presented, and in the later chapters, various ext...
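Model-structure validation with information criteria, as discussed above, trades goodness of fit against model complexity. The Gaussian-likelihood form of AIC used below is standard; the data size and residual sums of squares are hypothetical, invented to show the mechanism:

```python
# Selecting a model order with the Akaike Information Criterion (AIC).
# For Gaussian residuals, AIC = n*ln(RSS/n) + 2k up to an additive constant,
# where n is the number of data points, RSS the residual sum of squares,
# and k the number of estimated parameters.
import math

def aic(n, rss, k):
    return n * math.log(rss / n) + 2 * k

n = 200
# Hypothetical results: model order -> residual sum of squares.
# Order 2 fits much better than order 1; orders 3-4 barely improve on 2.
candidates = {1: 50.0, 2: 20.0, 3: 19.5, 4: 19.4}
best_order = min(candidates, key=lambda k: aic(n, candidates[k], k))
# The 2k penalty stops the criterion from always preferring the biggest model.
```

Swapping the `2 * k` penalty for `k * math.log(n)` gives BIC, which penalizes complexity more heavily for large n.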
Stubkjær, Erik
2005-01-01
Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land and buildings, documents, data stores and archives, as well as persons in their diverse roles as owners, holders ... to land. The paper advances the position that cadastral modeling has to include not only the physical objects, agents, and information sets of the domain, but also the objectives or requirements of cadastral systems.
Bois, Frederic Y; Brochot, Céline
2016-01-01
Pharmacokinetics is the study of the fate of xenobiotics in a living organism. Physiologically based pharmacokinetic (PBPK) models provide realistic descriptions of xenobiotics' absorption, distribution, metabolism, and excretion processes. They model the body as a set of homogeneous compartments representing organs, and their parameters refer to anatomical, physiological, biochemical, and physicochemical entities. They offer a quantitative mechanistic framework to understand and simulate the time-course of the concentration of a substance in various organs and body fluids. These models are well suited for performing extrapolations inherent to toxicology and pharmacology (e.g., between species or doses) and for integrating data obtained from various sources (e.g., in vitro or in vivo experiments, structure-activity models). In this chapter, we describe the practical development and basic use of a PBPK model from model building to model simulations, through implementation with an easily accessible free software. PMID:27311461
This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. The validity and limitations of three distinct types of modelling codes will be contrasted: discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs
Model choice versus model criticism
Robert, Christian P.; Mengersen, Kerrie; Chen, Carla
2009-01-01
The new perspectives on ABC and Bayesian model criticisms presented in Ratmann et al.(2009) are challenging standard approaches to Bayesian model choice. We discuss here some issues arising from the authors' approach, including prior influence, model assessment and criticism, and the meaning of error in ABC.
Garland, M A; Talbert, R J; Mashnik, S G; Wilson, W B
1999-01-01
Through decades of effort in nuclear data development and simulations of reactor neutronics and accelerator transmutation, a collection of reaction data is continuing to evolve with the potential of direct applications to the production of medical isotopes. At Los Alamos the CINDER'90 code and library have been developed for nuclide inventory calculations using neutron-reaction (En < 20 MeV) and/or decay data for 3400 nuclides; coupled with the LAHET Code System (LCS), irradiations in neutron and proton environments below a few GeV are tractable; additional work with the European Activation File, the HMS-ALICE code and the reaction models of MCNPX (CEM95, BERTINI, or ISABEL with or without preequilibrium, evaporation and fission) have been used to produce evaluated reaction data for neutrons and protons to 1.7 GeV. At the Pacific Northwest National Laboratory, efforts have focused on production of medical isotopes and the identification of available neutron reaction data from results of integral measuremen...
H. Yang
1999-11-04
The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future.
Bækgaard, Lars
2001-01-01
The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse are dynamic, and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects, events are phenomena that can be observed and described. For example, borrow events in a library can ... temporarily from bookcases to borrowers. When we characterize events as change agents we focus on concepts like transactions, entity processes, and workflow processes.
This paper is an introductory course in modelling turbulent thermohydraulics, aimed at computational fluid dynamics users. No specific knowledge other than the Navier-Stokes equations is required beforehand. Chapter I (which those who are not beginners can skip) provides basic ideas on turbulence physics and is taken up in a textbook prepared by the teaching team of the ENPC (Benque, Viollet). Chapter II describes turbulent-viscosity-type modelling and the k-ε two-equation model. It provides details of the channel flow case and the boundary conditions. Chapter III describes the 'standard' (Rij-ε) Reynolds stress transport model and introduces more recent models called 'realizable'. A second paper deals with heat transfer and the effects of gravity, and returns to the Reynolds stress transport model. (author)
Braby, L.A.
1990-09-01
The biological effects of ionizing radiation exposure are the result of a complex sequence of physical, chemical, biochemical, and physiological interactions. One way to begin a search for an understanding of health effects of radiation is through the development of phenomenological models of the response. Many models have been presented and tested in the slowly evolving process of characterizing cellular response. A range of models covering different endpoints and phenomena has developed in parallel. Many of these models employ similar assumptions about some underlying processes while differing about the nature of others. An attempt is made to organize many of the models into groups with similar features and to compare the consequences of those features with the actual experimental observations. It is assumed that by showing that some assumptions are inconsistent with experimental observations, the job of devising and testing mechanistic models can be simplified. 43 refs., 13 figs.
Nielsen, Mogens Peter; Shui, Wan; Johansson, Jens
2011-01-01
In this report a new turbulence model is presented. In contrast to the bulk of modern work, the model is a classical continuum model with a relatively simple constitutive equation. The constitutive equation is, as usual in continuum mechanics, entirely empirical. It has the usual Newton or Stokes term with stresses depending linearly on the strain rates. This term takes into account the transfer of linear momentum from one part of the fluid to another. Besides, there is another term which takes into account the transfer of angular momentum. Thus the model implies a new definition of turbulence. The model is in a virgin state, but a number of numerical tests have been carried out with good results. It is published to encourage other researchers to study the model in order to find its merits and possible limitations.
Blomhøj, Morten
2004-01-01
Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only from upper secondary level and above. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics at the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical...
2016-01-01
This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.
Sochůrková, Adéla
2012-01-01
The aim of this thesis is to compile an overview of inventory management methods, describe their principles and assess the appropriateness of their use. The introductory part of the work, "The nature and importance of inventory management", briefly describes inventory management, the main objectives of inventory control models, the basic division of inventory types, and the costs of supply. The following chapter, "Overview of inventory control models", includes a breakdown of models from dif...
Epstein, Joshua M.
2008-01-01
This address treats some enduring misconceptions about modeling. One of these is that the goal is always prediction. The lecture distinguishes between explanation and prediction as modeling goals, and offers sixteen reasons other than prediction to build a model. It also challenges the common assumption that scientific theories arise from and 'summarize' data, when often, theories precede and guide data collection; without theory, in other words, it is not clear what data to collect. Among ot...
Marco Antonio Moreira
1996-12-01
The mental models subject is presented particularly in the light of Johnson-Laird's theory. Views from different authors are also presented, but the emphasis lies on Johnson-Laird's approach, proposing mental models as a third path in the images vs. propositions debate. In this perspective, the nature, content, and typology of mental models are discussed, as well as the issues of consciousness and computability. In addition, the methodology of research studies is described. Essentially, the aim of the paper is to provide an introduction to the mental models topic, having science education research in mind.
Liu, Zhou; Frigaard, Peter
This report presents the model on wave run-up and run-down on the Zeebrugge breakwater under short-crested oblique wave attacks. The model test was performed in March-April 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University.
Vestergaard, Kristian
engineers, but as the scale and the complexity of the hydraulic works increased, the mathematical models became so complex that a mathematical solution could not be obtained. This created a demand for new methods, and again the experimental investigation became popular, but this time as measurements on small-scale models. But still the scale and complexity of hydraulic works were increasing, and soon even small-scale models reached a natural limit for some applications. In the meantime the modern computer was developed, and it became possible to solve complex mathematical models by use of computer-based numerical...
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) to validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0); and (2) to satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a), specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1) and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post
Modeling Documents with Event Model
Longhui Wang
2015-08-01
Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way in which the brain deals with images and speech. In the field of NLP, a topic model is one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. The Event Model has two stages: word learning and dimensionality reduction. Word learning is learning the semantics of words based on deep learning. Dimensionality reduction is the process of representing a document as a low-dimensional vector by a linear model that is completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.
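As a toy illustration of the second stage, a document can be reduced to a low-dimensional vector by a purely linear operation over learned word vectors. This is a sketch of the general idea only: the abstract does not specify the paper's exact linear map, so `document_vector` and the averaging step are hypothetical.

```python
import numpy as np

def document_vector(doc_words, embeddings, dim=50):
    """Reduce a document to a low-dimensional vector by a linear map
    over word vectors (here a simple mean; the paper's actual linear
    mode is not given in the abstract -- this is an assumption)."""
    vecs = [embeddings[w] for w in doc_words if w in embeddings]
    if not vecs:
        return np.zeros(dim)  # document contains no known words
    return np.mean(vecs, axis=0)
```

Any linear reduction (mean, sum, or a learned projection matrix) shares the key property the abstract emphasizes: unlike a topic model, no generative process over documents is assumed.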
Poortman, Sybilla; Sloep, Peter
2006-01-01
"Educational models" describes a case study on a complex learning object. Possibilities are investigated for using this learning object, which is based on a particular educational model, outside of its original context. Furthermore, this study provides advice that might lead to an increase in teachers' motivation for using and sharing learning objects. This document is aimed at teachers and educational designers.
Højgaard, Tomas; Hansen, Rune
2016-01-01
The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful to...
Jantzen, Jan
1998-01-01
A neural network can approximate a function, but it is impossible to interpret the result in terms of natural language. The fusion of neural networks and fuzzy logic in neurofuzzy models provide learning as well as readability. Control engineers find this useful, because the models can be...
Giandomenico, Rossano
2006-01-01
The model determines a stochastic continuous process as the continuous limit of a stochastic discrete process, so as to show that the stochastic continuous process converges to the stochastic discrete process such that we can integrate it. Furthermore, the model determines the expected volatility and the expected mean, so as to show that the volatility and the mean are increasing functions of time.
Løssing, Ulrik
1986-01-01
Ulrik Løssing has edited, illustrated and translated "Scribe Modeller System, Sheffield, November 1985" by the authors Cedric Green, David Cooper and John Wells.
Gøtze, Jens Peter; Krentz, Andrew
2014-01-01
In this issue of Cardiovascular Endocrinology, we are proud to present a broad and dedicated spectrum of reviews on animal models in cardiovascular disease. The reviews cover most aspects of animal models in science from basic differences and similarities between small animals and the human...
Kindler, Ekkart
2009-01-01
There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these notations have been extended in order to increase expressiveness and to be more competitive. This resulted in an increasing number of notations and formalisms for modelling business processes and in an increase of the different modelling constructs provided by modelling notations, which makes it difficult to compare modelling notations and to make transformations between them. One of the reasons is that, in each notation, the new concepts are introduced in a different way by extending the already existing constructs. In this chapter, we go the opposite direction: We show that it is possible to add...
Building Models and Building Modelling
Jørgensen, Kaj Asbjørn; Skauge, Jørn
theoretical basis for the chapters with a more theoretical content. The following appendices B-D contain more detailed characteristics of the two modelling CAD programs, ArchiCAD and Architectural Desktop, together with a comparison between the two tools. The remaining two appendices describe the specific issues concerning the modelling of the two "Sorthøjparken" models, and the resulting models are presented and evaluated. The full report is published on the project website, www.iprod.aau.dk/bygit/Web3B/, under Technical Reports.
Veronica J. Rutledge
2013-01-01
The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to
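The breakthrough behaviour described above can be illustrated with a minimal 1-D column model. This is a sketch only, not the OSPREY equations: explicit upwind transport coupled to linear-driving-force adsorption kinetics, with made-up parameter values.

```python
import numpy as np

def breakthrough(n=100, L=1.0, u=0.1, k=5.0, K=2.0, c_in=1.0,
                 t_end=100.0, dt=0.005):
    """Toy 1-D adsorption column (illustrative parameters only).

    c : gas-phase concentration along the bed, q : sorbent loading.
    Linear-driving-force kinetics  dq/dt = k (K c - q), with upwind
    advection of the gas phase.  Returns the outlet concentration
    history -- the breakthrough curve used to size columns.
    """
    dx = L / n
    c = np.zeros(n)
    q = np.zeros(n)
    outlet = []
    for _ in range(int(t_end / dt)):
        dqdt = k * (K * c - q)
        # upwind advection from the inlet, plus loss to the sorbent
        adv = -u * np.diff(np.concatenate(([c_in], c))) / dx
        c = c + dt * (adv - dqdt)
        q = q + dt * dqdt
        outlet.append(c[-1])
    return np.array(outlet)
```

The outlet stays near zero until the adsorption front (retarded by the bed capacity, here by a factor of roughly 1 + K) reaches the end of the column, then rises toward the inlet feed; the area above the curve gives the bed capacity, exactly the use of breakthrough data described above.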
Kreiner, Svend; Christensen, Karl Bang
Rasch models; Partial Credit models; Rating Scale models; Item bias; Differential item functioning; Local independence; Graphical models
Grimaldi, P.
2012-07-01
Stereometric modelling means modelling achieved with: the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); the visualization of the shot in two distinct windows; and the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is inaccurately referred to as the simple perspective of an object, it is necessary to add the word stereo, so that "3D stereo vision" shall stand for "three-dimensional view" and, therefore, measure the width, height and depth of the surveyed image. This is made possible by the development of a stereometric model, either real or virtual, through the "materialization", either real or virtual, of the optical stereometric model made visible with a stereoscope. A continuous online updating of the cultural heritage is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available online at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view
Hodges, Wilfrid
1993-01-01
An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.
Insepov, Zeke; Veitzer, Seth; Mahalingam, Sudhakar
2011-01-01
Although vacuum arcs were first identified over 110 years ago, they are not yet well understood. We have since developed a model of breakdown and gradient limits that tries to explain, in a self-consistent way: arc triggering, plasma initiation, plasma evolution, surface damage and gradient limits. We use simple PIC codes for modeling plasmas; molecular dynamics for modeling surface breakdown and surface damage; and mesoscale surface thermodynamics and finite element electrostatic codes to evaluate surface properties. Since any given experiment seems to have more variables than data points, we have tried to consider a wide variety of arcing (rf structures, e-beam welding, laser ablation, etc.) to help constrain the problem, and concentrate on common mechanisms. While the mechanisms can be comparatively simple, modeling can be challenging.
National Oceanic and Atmospheric Administration, Department of Commerce — Computer simulations of past climate. Variables provided as model output are described by parameter keyword. In some cases the parameter keywords are a subset of...
The author's goal is to provide a physical understanding of the ideal MHD model, which includes: (1) a basic description of the model, (2) a derivation starting from a more fundamental kinetic model, and (3) a discussion of its range of validity. The ideal MHD model is a single-fluid model that describes the effects of magnetic geometry on the macroscopic equilibrium and stability properties of fusion plasmas. The model is derived in a straightforward manner by forming the mass, momentum, and energy moments of the Boltzmann equation. The moment equations reduce to ideal MHD with the introduction of three critical assumptions: high collisionality, small ion gyroradius, and small resistivity. An analysis of the validity conditions shows that the collision-dominated assumption is never satisfied in plasmas of fusion interest. The remaining two conditions are satisfied by a wide margin. A careful examination of the collision-dominated assumption shows that those particular parts of ideal MHD treated inaccurately (i.e., the parallel momentum and energy equations) play little, if any, practical role in MHD equilibrium and stability. These equations primarily describe compression and expansion of a plasma, whereas most MHD instabilities involve incompressible motions. The model is incorrect only where it does not matter. This realization leads to the introduction of a modified MHD model, known as collisionless MHD, which makes predictions nearly identical to those of the collision-dominated model; it is thus valid for plasmas of fusion interest. The derivation follows from an analysis of single-particle guiding center motion in a collisionless plasma and the subsequent closure of the system by the heuristic assumption that the motions of interest are incompressible.
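For reference, the single-fluid equations the abstract refers to can be written in their standard textbook form (this is the conventional normalization, not necessarily the author's exact notation):

```latex
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0,
\qquad
\rho\,\frac{d\mathbf{v}}{dt} = \mathbf{J}\times\mathbf{B} - \nabla p,
\qquad
\mathbf{E} + \mathbf{v}\times\mathbf{B} = 0,
\qquad
\frac{d}{dt}\!\left(\frac{p}{\rho^{\gamma}}\right) = 0,
```

together with the low-frequency Maxwell equations $\nabla\times\mathbf{B}=\mu_0\mathbf{J}$, $\nabla\times\mathbf{E}=-\partial\mathbf{B}/\partial t$, and $\nabla\cdot\mathbf{B}=0$. The adiabatic energy equation is the part singled out above as inaccurate but practically unimportant for incompressible instabilities.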
Accelerated life models modeling and statistical analysis
Bagdonavicius, Vilijandas
2001-01-01
Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Varia...
Kocherlakota, Narayana R.
2007-01-01
This paper uses an example to show that a model that fits the available data perfectly may provide worse answers to policy questions than an alternative, imperfectly fitting model. The author argues that, in the context of Bayesian estimation, this result can be interpreted as being due to the use of an inappropriate prior over the parameters of shock processes. He urges the use of priors that are obtained from explicit auxiliary information, not from the desire to obtain identification.
Model composition in model checking
Felscher, Ingo
2014-01-01
Model checking allows one to formally check properties of systems: these properties are modeled as logic formulas and the systems as structures like transition systems. These transition systems are often composed, i.e., they arise in the form of products or sums. The composition technique allows us to deduce the truth of a formula in the composed system from "interface information": the truth of formulas for the component systems and information about which of these formulas hold in which components. W...
The invention relates to devices for modelling the space-dependent kinetics of a nuclear reactor. It can be advantageously used in studying the dynamics of the neutron field in the core, to determine the effect of the control rods on the power distribution in the core, and for training purposes. The proposed analog model of a nuclear reactor comprises operational amplifiers and a grid of resistors simulating neutron diffusion. Connected to the grid nodes are supply resistors modelling absorption and multiplication of neutrons. This is achieved in the proposed model by interconnecting the other leads of all resistors through which power is supplied to the grid nodes, and coupling them to the output of the amplifier unit common to all nodes. Therewith, the amplifier unit models the transfer function of a "point" reactor. Connected to the input of this unit, which includes two to four amplifiers, are resistors for the addition of signals from the grid nodes. Coupled to the grid nodes via additional resistors are voltage sources simulating reactivity.
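The resistor grid is an analog realization of a finite-difference diffusion operator: each grid resistor couples neighbouring nodes (diffusion), while the supply resistors add absorption and multiplication at each node. A minimal digital sketch of the same idea for a bare 1-D slab (illustrative cross-section values, not from the patent) solves the resulting eigenproblem for the flux shape and the multiplication factor:

```python
import numpy as np

def node_flux(n=20, D=1.0, sigma_a=0.1, nu_sigma_f=0.12, h=1.0):
    """Finite-difference analogue of the resistor grid for a 1-D slab.

    Grid resistors -> off-diagonal couplings -D/h^2 (diffusion);
    supply resistors -> diagonal absorption sigma_a and a fission
    source nu_sigma_f.  Returns (k_eff, normalized fundamental mode).
    Parameters are illustrative only.
    """
    # loss operator: -D d^2/dx^2 + sigma_a, zero flux at the boundaries
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * D / h**2 + sigma_a
        if i > 0:
            A[i, i - 1] = -D / h**2
        if i < n - 1:
            A[i, i + 1] = -D / h**2
    F = nu_sigma_f * np.eye(n)
    # generalized eigenproblem  A phi = (1/k) F phi
    vals, vecs = np.linalg.eig(np.linalg.solve(A, F))
    k = vals.real.max()
    phi = vecs[:, vals.real.argmax()].real
    return k, phi / np.abs(phi).max()
```

In the analog device the same eigenstructure is realized physically: node voltages play the role of the flux, and the common amplifier unit closes the multiplication loop, so changing a "control rod" resistor immediately reshapes the voltage (power) distribution.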
Tel, E.; Durgu, C.; Aktı, N. N.; Okuducu, Ş.
2010-06-01
Fusion offers an inexhaustible source of energy for humankind. Although there have been significant research and development studies on inertial and magnetic fusion reactor technology, there is still a long way to go before commercial fusion reactors penetrate the energy market. Tritium self-sufficiency must be maintained for a commercial power plant. For a self-sustaining (D-T) fusion driver, the tritium breeding ratio should be greater than 1.05. Working out the systematics of (n, t) reaction cross sections is therefore of great importance for defining the character of the excitation function for a given reaction taking place on various nuclei at different energies. In this study, (n, t) reactions for some structural fusion materials such as 27Al, 51V, 52Cr, 55Mn, and 56Fe have been investigated. New calculations of the excitation functions of the 27Al(n, t)25Mg, 51V(n, t)49Ti, 52Cr(n, t)50V, 55Mn(n, t)53Cr and 56Fe(n, t)54Mn reactions have been carried out up to 50 MeV incident neutron energy. In these calculations, the pre-equilibrium and equilibrium effects have been investigated. The pre-equilibrium calculations involve the newly evaluated geometry-dependent hybrid model, the hybrid model and the cascade exciton model. Equilibrium effects are calculated according to the Weisskopf-Ewing model. Also in the present work, we have calculated (n, t) reaction cross sections by using new semi-empirical formulas developed by Tel et al. at 14-15 MeV energy. The calculated results are discussed and compared with the experimental data taken from the literature.
Nucleon-induced fission cross sections of heavy nuclei in the intermediate energy region
Fission is the most important nuclear reaction for society at large today due to its use in energy production. However, this has raised the problem of how to treat the long-lived radioactive waste from nuclear reactors. A radical solution would be to change the composition of the waste into stable or short-lived nuclides, which could be done through nuclear transmutation. Such a concept requires accelerator-driven systems to be designed, where those for transmutation are reactor hybrids. This thesis is a contribution to the knowledge base for developing transmutation systems, specifically with respect to the computational modeling of the underlying nuclear reactions, induced by the incident and secondary particles. Intermediate energy fission cross sections are one important type of such data. Moreover, they are essential for understanding the fission process itself and related nuclear interactions. The experimental part of this work was performed at the neutron beam facility of the The Svedberg Laboratory in Uppsala. Fission cross sections of 238U, 209Bi, Pb, 208Pb, 197Au, W, and 181Ta were measured for neutrons in the range En = 30-160 MeV using thin-film breakdown counters for the fission fragment detection. A model was developed for the determination of the efficiency of such detectors. A compilation of existing data on proton-induced fission cross sections for nuclei from 165Ho to 239Pu was performed. The results, which constitute the main body of information in this field, were added to the worldwide EXFOR database. The dependences of the cross sections on incident energy and target nucleus were studied, which resulted in systematics that make it possible to give estimates for unmeasured nuclides. Nucleon-induced fission cross sections were calculated using an extended version of the cascade exciton model. A comparison with the systematics and the experimental data obtained in the present work revealed significant discrepancies. A modification of the model
Nash, Ulrik William
2014-01-01
Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory of probabilistic functionalism, and concerns the environment and the mind, and adaptation by the latter to the former. This entry is about the lens model, and probabilistic functionalism more broadly. Focus will mostly be on firms and their employees, but, to fully appreciate the scope, we have to keep in mind the...
2012-01-01
The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns, an evident characteristic of architectural practice. But the persistence in persistent modelling can also be understood to apply in other ways, reflecting and anticipating extended roles for representation. This book identifies three principal areas in which these extensions are becoming apparent within contemporary... It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience.
Bork Petersen, Franziska
2013-01-01
focus centres on how the catwalk scenography evokes a 'defiguration' of the walking models and to what effect. Vibskov's mobile catwalk draws attention to the walk, which is a key element of models' performance but which usually functions in fashion shows merely to present clothes in the most advantageous manner. Stepping on the catwalk's sloping, moving surfaces decelerates the models' walk and makes it cautious, hesitant and shaky: suddenly the models lack exactly the affirmative, staccato, striving quality of motion, and the condescending expression, that they perform on most contemporary... determines the models' walk. Furthermore, letting the models set off sound through triggers with attached sound samples gives them an implied agency. This calls into question the designer's unrestricted authorship.
Ling Li; Vasily Volkov
2006-01-01
A physically-based model is presented for the simulation of a new type of deformable object: inflatable objects, such as shaped balloons, which consist of pressurized air enclosed by an elastic surface. These objects have properties inherent in both 3D and 2D elastic bodies, as they demonstrate the behaviour of 3D shapes using 2D formulations. As there is no internal structure in them, their behaviour is substantially different from the behaviour of deformable solid objects. We use one of the few available models for deformable surfaces, and enhance it to include the forces of internal and external pressure. These pressure forces may also incorporate buoyancy forces, to allow objects filled with a low-density gas to float in denser media. The obtained models demonstrate rich dynamic behaviour, such as bouncing, floating, deflation and inflation.
Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the 56Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed
Aarti Sharma
2009-01-01
The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. Recent advances in computing power and the exponential growth of knowledge of protein structures have made it possible for organic compounds to be tailored to decrease harmful side effects and increase potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modeling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.
Arnoldi, Jakob
The article discusses the use of algorithmic models for so-called High Frequency Trading (HFT) in finance. HFT is controversial yet widespread in modern financial markets. It is a form of automated trading technology which critics among other things claim can lead to market manipulation. Drawing on two cases, this article shows that manipulation more likely happens in the reverse way, meaning that human traders attempt to make algorithms 'make mistakes' or 'mislead' algos. Thus, it is algorithmic models, not humans, that are manipulated. Such manipulation poses challenges for security exchanges...
Holmes, Jon L.
1999-06-01
Molecular modeling has trickled down from the realm of pharmaceutical and research laboratories into the realm of undergraduate chemistry instruction. It has opened avenues for the visualization of chemical concepts that previously were difficult or impossible to convey. I am sure that many of you have developed exercises using the various molecular modeling tools. It is the desire of this Journal to become an avenue for you to share these exercises among your colleagues. It is to this end that Ron Starkey has agreed to edit such a column and to publish not only the description of such exercises, but also the software documents they use. The WWW is the obvious medium to distribute this combination and so accepted submissions will appear online as a feature of JCE Internet. (Figure: typical molecular modeling exercise, finding conformation energies.) Molecular Modeling Exercises and Experiments is the latest feature column of JCE Internet, joining Conceptual Questions and Challenge Problems, Hal's Picks, and Mathcad in the Chemistry Curriculum. JCE Internet continues to seek submissions in these areas of interest and submissions of general interest. If you have developed materials and would like to submit them, please see our Guide to Submissions for more information. The Chemical Education Resource Shelf, Equipment Buyers Guide, and WWW Site Review would also like to hear about chemistry textbooks and software, equipment, and WWW sites, respectively. Please consult JCE Internet Features to learn more about these resources at JCE Online. Email Announcements: Would you like to be informed by email when the latest issue of the Journal is available online? when a new JCE Software title is shipping? when a new JCE Internet article has been published or is available for Open Review? when your subscription is about to expire? A new feature of JCE Online makes this possible. Visit our Guestbook to learn how. When you submit the form on this page, which includes your email address
Cardey, Sylviane
2013-01-01
In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which, being based on a constructive approach, are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int
Sivaram, C.
2007-01-01
An alternate model for gamma ray bursts is suggested. For a white dwarf (WD) and neutron star (NS) very close binary system, the WD (close to Mch) can detonate due to tidal heating, leading to a SN. Material falling on to the NS at relativistic velocities can cause its collapse to a magnetar or quark star or black hole leading to a GRB. As the material smashes on to the NS, it is dubbed the Smashnova model. Here the SN is followed by a GRB. NS impacting a RG (or RSG) (like in Thorne-Zytkow ob...
Calculations, drawing principally on developments at AERE Harwell, of the relaxation about lattice defects are reviewed with emphasis on the techniques required for such calculations. The principles of defect modelling are outlined and various programs developed for defect simulations are discussed. Particular calculations for metals, ionic crystals and oxides, are considered. (UK)
Michael, John
others' minds. Then (2), in order to bring to light some possible justifications, as well as hazards and criticisms of the methodology of looking time tests, I will take a closer look at the concept of folk psychology and will focus on the idea that folk psychology involves using oneself as a model of...
Olaf eWolkenhauer
2014-01-01
Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics turns out to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of genes and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question "Why model?"
Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.
2015-12-01
The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models and show how the transformation of a numerical model into an interactive painting can both provide insights and solve real-world problems. The example cases include forensic reconstructions, dredging optimization, and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different temporal and spatial scales. High-resolution numerical models become interactive paintings by exchanging their velocity fields with a high-resolution (>=1M cells) image-based flow visualization that runs in an HTML5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz on modest graphics cards. The software is provided as open source. By using different sources for the drawing, one can gain insight into several aspects of the velocity fields, including not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.
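The advect-and-blend step described above can be sketched outside the browser. The following NumPy version is an illustrative simplification only: the function names and the nearest-neighbour sampling are my own, whereas the actual system uses bilinear sampling in a WebGL fragment shader.

```python
import numpy as np

def advect(image, u, v, dt=1.0):
    """Semi-Lagrangian advection: sample the image at back-traced positions."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Trace each pixel backwards along the velocity field, clamped to the grid.
    src_x = np.clip(xs - dt * u, 0, w - 1)
    src_y = np.clip(ys - dt * v, 0, h - 1)
    # Nearest-neighbour sampling keeps the sketch short; IBFV uses bilinear.
    return image[np.round(src_y).astype(int), np.round(src_x).astype(int)]

def ibfv_step(current, drawing, u, v, blend=0.1):
    """One image-based flow visualization step: advect, then inject the drawing."""
    return (1 - blend) * advect(current, u, v) + blend * drawing
```

With a uniform rightward velocity field, a bright pixel in the image is carried one cell to the right per step, which is the core mechanism behind the "interactive painting" effect.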
Taylor, Julie
2013-01-01
This paper provides a brief overview of the NSPCC/University of Edinburgh Child Protection Research Centre. It highlights the Centre's work, approach, progress to date and direction of travel. The document includes the Centre's Logic Model which details types of research and outcomes.
R.E. Waltz
2007-01-01
There has been remarkable progress during the past decade in understanding and modeling turbulent transport in tokamaks. With some exceptions, the progress is derived from the huge increases in computational power and the ability to simulate tokamak turbulence with ever more fundamental and physically realistic dynamical equations, e.g.
Burianová, Eva
2008-01-01
The aim of the first part of this bachelor's thesis is, through an analysis of the source texts, to give a theoretical summary of the economic models and theories on which the CAPM model rests: Markowitz's portfolio theory (the analysis of expected-utility maximization and the optimal-portfolio-selection model based on it) and Tobin's extension of the Markowitz model, which splits the selection of the optimal portfolio into two stages: first determining the optimal combination of risky instruments, and then allocating the available capital between this optimal ...
Jensen, Morten S.; Frigaard, Peter
In the following, results from model tests with the Zeebrugge breakwater are presented. The objective of these tests is partly to investigate the influence on wave run-up of a changing water level during a storm. Finally, the influence on wave run-up of an introduced longshore current is...
N. Bosma (Niels); G. de Wit (Gerrit); M.A. Carree (Martin)
2003-01-01
Two approaches can be distinguished with respect to modelling entrepreneurship: (i) the approach focusing on the net development of the number of entrepreneurs in an equilibrium framework and (ii) the approach focusing on the entries and exits of entrepreneurs. In this paper we unify the
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations, to calculate lower-bound tolerance limit (LBTL) values, and to determine range-of-applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality
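As a rough illustration of the LBTL idea only (not the report's actual procedure, which derives limits and applicability ranges from selected benchmark experiments), a generic one-sided normal-theory lower tolerance limit can be sketched as:

```python
import statistics

def lower_bound_tolerance_limit(keff_values, k_factor):
    """Generic one-sided lower tolerance limit: sample mean minus k times
    the sample standard deviation.

    k_factor depends on sample size, coverage and confidence level; it is
    normally taken from tolerance-factor tables and is supplied here as an
    assumed input.
    """
    mean = statistics.mean(keff_values)
    s = statistics.stdev(keff_values)
    return mean - k_factor * s

# Hypothetical benchmark k-effective values:
lbtl = lower_bound_tolerance_limit([1.0, 1.002, 0.998, 1.0], k_factor=2.0)
```

The resulting limit lies below the sample mean by a margin that grows with the scatter of the benchmark results.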
A. Alsaed
2004-09-14
There are several new technological application fields for fast neutrons, such as accelerator-driven incineration/transmutation of long-lived radioactive nuclear wastes (in particular transuranium nuclides) into short-lived or stable isotopes by secondary spallation neutrons produced by high-intensity, intermediate-energy charged-particle beams; prolonged planetary space missions; and shielding for particle accelerators. In particular, accelerator-driven subcritical systems (ADS) can be used for fission energy production and/or nuclear waste transmutation, as well as in intermediate-energy accelerator-driven neutron sources, which involve ions and neutrons with energies beyond 20 MeV, the upper limit of existing data files produced for fusion and fission applications. In these systems, the neutron scattering cross sections and emission differential data are very important for reactor neutronics calculations. The transition-rate calculation introduces a mean-free-path parameter that determines the mean free path of the nucleon in nuclear matter. This parameter allows an increase in the mean free path, simulating effects not considered in the calculations, such as conservation of parity and angular momentum in intranuclear transitions. In this study, we have investigated the multiple-preequilibrium matrix-element constant for internal transitions in uranium and thorium (n,xn) neutron-emission spectra. The neutron-emission spectra produced by (n,xn) reactions on some target nuclei (for spallation) have been calculated. In the calculations, we have used the geometry-dependent hybrid model and the cascade-exciton model, including preequilibrium effects. The preequilibrium direct effects have been examined using the full exciton model. All calculated results have been discussed and compared with the available experimental data and found to be in agreement.
Information Model for Product Modeling
焦国方; 刘慎权
1992-01-01
The key problems in product modeling for integrated CAD/CAM systems are the information structures and representations of products, which play increasingly important roles in engineering applications. Based on an investigation of engineering product information and from the viewpoint of the industrial process, this paper proposes information models and gives definitions of the framework of product information. The integration and consistency of product information are then discussed by introducing the entity and its instance. In summary, the information structures described in this paper have many advantages and properties helpful in engineering design.
Aarti Sharma
2009-12-01
Full Text Available
Almeida, Leandro S.; José Fernando A. Cruz; Ferreira, Helena Isabel dos Santos Ribeiro; Pinto, Alberto
2011-01-01
The Theory of Planned Behavior studies the decision-making mechanisms of individuals. We propose the Nash equilibrium as one, of many, possible mechanisms for transforming human intentions into behavior. This process corresponds to the best strategic individual decision, taking into account the collective response. We built a game-theoretical model to understand the role of leaders in the decision-making of individuals or groups. We study the characteristics of the leaders that can have a...
Clyde, Merlise; George, Edward I.
2004-01-01
The evolution of Bayesian approaches for model uncertainty over the past decade has been remarkable. Catalyzed by advances in methods and technology for posterior computation, the scope of these methods has widened substantially. Major thrusts of these developments have included new methods for semiautomatic prior specification and posterior exploration. To illustrate key aspects of this evolution, the highlights of some of these developments are described.
This lecture was given at the KEK Summer School on August 3-6, 1993 by Professor N. Sakai. All the available experimental data at low energy can be adequately described by the standard model with the SU(3) x SU(2) x U(1) gauge group. The three different gauge coupling constants originate from the three different interactions, namely the strong, weak and electromagnetic interactions. The three interactions described by the three different gauge groups can be truly unified if a single simple gauge group describing all three interactions is chosen. Even if the grand unified theory is not accepted, the existence of the gravitational interaction is certain. There are only two options to explain the gauge hierarchy, namely the technicolor model and supersymmetry. As an introduction to supersymmetry, spinors and Grassmann numbers, supertransformations, unitary representations, the chiral scalar superfield and supersymmetric Lagrangian field theory are explained. Regarding the supersymmetric SU(3) x SU(2) x U(1) model, Yukawa couplings and particle content are described. It should be noted that the Higgsino (the chiral fermion associated with the Higgs scalar) in general introduces an anomaly in gauge currents. The simplest way out of this anomaly problem is to introduce Higgsino doublets in pairs. (K.I.)
Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO2), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically nitrogen oxides (NOx) and non-methane organic compounds (NMOC), in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane.) Additionally, ozone formation is a function of the ratio of NMOC concentrations to NOx concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e. background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the NMOC and NOx coordinates of the point, known as the NMOC/NOx ratio. Results obtained by the described model are presented
Technological Forecasting---Model Selection, Model Stability, and Combining Models
Nigel Meade; Towhidul Islam
1998-01-01
The paper identifies 29 models that the literature suggests are appropriate for technological forecasting. These models are divided into three classes according to the timing of the point of inflexion in the innovation or substitution process. Faced with a given data set and such a choice, the issue of model selection needs to be addressed. Evidence used to aid model selection is drawn from measures of model fit and model stability. An analysis of the forecasting performance of these models u...
Model Construct Based Enterprise Model Architecture and Its Modeling Approach
None
2002-01-01
To support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. A case study applying the MC-based modeling approach, with the background of one-kind-product machinery manufacturing enterprises, is illustrated. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.
Unnikrishnan, A.S.; Manoj, N.T.
the wetted perimeter and A the area of the cross section (excluding mud flats); C = (1.49/n)R^(1/6), where n is the Manning coefficient. The numerical scheme used by Harleman and Lee (1969) was used to solve the above equations. In this scheme, the continuity equation is solved at odd grid points to compute eta at the next time step, and the momentum equation is solved at even grid points to compute U. The original scheme of Harleman & Lee (1969) was developed for a single channel. For developing a model...
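The staggered-grid idea described above (surface elevation eta at one set of points, velocity U at the other) can be sketched with a linearized 1-D shallow-water step. This is an illustrative toy, not Harleman and Lee's actual scheme: all numbers are assumed, and the friction term involving the Chezy coefficient C is omitted for brevity.

```python
import numpy as np

# Minimal linearized 1-D tidal channel on a staggered grid (illustrative only):
# eta (surface elevation) lives at cell centres, U (velocity) at cell faces.
g, depth = 9.81, 10.0        # gravity [m/s^2], mean depth [m] (assumed)
dx, dt = 1000.0, 10.0        # grid spacing [m], time step [s] (assumed)
n = 51
eta = np.zeros(n)
U = np.zeros(n + 1)          # closed boundaries: U[0] and U[-1] stay zero

def step(eta, U):
    eta, U = eta.copy(), U.copy()
    # Momentum: dU/dt = -g * d(eta)/dx  (friction omitted for brevity)
    U[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # Continuity: d(eta)/dt = -depth * dU/dx
    eta -= dt * depth * (U[1:] - U[:-1]) / dx
    return eta, U
```

With closed boundaries, the continuity update conserves the total volume exactly, which is one practical reason staggered grids are popular for channel models.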
Liu Zhiyang
2011-01-01
Similar to ISO technical committees, SAC technical committees undertake the management and coordination of standards development and amendment in various industry sectors, playing the role of a bridge among enterprises, research institutions and the governmental standardization administration. How to let them fully play this essential role is a vital issue SAC has been committed to resolving. Among hundreds of SAC TCs, one stands out in knitting together isolated, scattered, but highly competitive enterprises in the same industry with the "Standards" thread, achieving remarkable results in promoting industry development through standardization. It sets a role model for other TCs.
From model checking to model measuring
Henzinger, Thomas A.; Otop, Jan
2013-01-01
We define the model-measuring problem: given a model $M$ and specification $\varphi$, what is the maximal distance $\rho$ such that all models $M'$ within distance $\rho$ from $M$ satisfy (or violate) $\varphi$? The model-measuring problem presupposes a distance function on models. We concentrate on automatic distance functions, which are defined by weighted automata. The model-measuring problem subsumes several generalizations of the classical model-checking problem, in particular, qu...
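Spelled out (with $d$ denoting the distance function on models, a notation of mine rather than the paper's), the quantity being computed for the "satisfy" case is:

```latex
\rho(M,\varphi) \;=\; \sup\bigl\{\, r \ge 0 \;:\; \forall M'\;\bigl( d(M,M') \le r \;\Rightarrow\; M' \models \varphi \bigr) \bigr\}
```

Classical model checking is then the special case of asking whether $\rho(M,\varphi) \ge 0$, i.e. whether $M$ itself satisfies $\varphi$.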
ModelWizard: Toward Interactive Model Construction
Hutchison, Dylan
2016-01-01
Data scientists engage in model construction to discover machine learning models that well explain a dataset, in terms of predictiveness, understandability and generalization across domains. Questions such as "what if we model common cause Z" and "what if Y's dependence on X reverses" inspire many candidate models to consider and compare, yet current tools emphasize constructing a final model all at once. To more naturally reflect exploration when debating numerous models, we propose an inter...
Towards a Multi Business Model Innovation Model
Lindgren, Peter; Jørgensen, Rasmus
2012-01-01
This paper studies the evolution of business model (BM) innovations related to a multi business model framework. The paper tries to answer the research questions: • What are the requirements for a multi business model innovation model (BMIM)? • What should a multi business model innovation model (BMIM) look like? Different generations of BMIMs are first studied to lay the baseline for how a next-generation multi BM innovation model (BMIM) should look. All generations of models are analyzed with the purpose of comparing the characteristics and challenges of previous...
Better Language Models with Model Merging
Brants, T
1996-01-01
This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.
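The successive state combination described above can be sketched on a count-based Markov model. The representation and names below are my own; Brants' algorithm additionally chooses which pair of states to merge by the resulting likelihood loss, which this sketch leaves out.

```python
from collections import defaultdict

def merge_states(counts, s1, s2):
    """Merge state s2 into s1 in a count-based Markov model.

    counts maps state -> {next_state: transition_count}. Merging pools the
    outgoing counts of the two states and redirects incoming transitions,
    producing a smaller, more general model.
    """
    merged = defaultdict(lambda: defaultdict(int))
    for s, nxt in counts.items():
        src = s1 if s == s2 else s
        for t, c in nxt.items():
            dst = s1 if t == s2 else t
            merged[src][dst] += c
    return {s: dict(nxt) for s, nxt in merged.items()}
```

Starting from a large, specific model (e.g. one state per word context) and repeating this operation yields the smaller merged models whose perplexity the paper evaluates.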
Impedance model for nanostructures
R. S. Akhmedov
2007-06-01
Full Text Available The application of the impedance model to the modelling of nanoelectronic quantum-mechanical structures is described. Characteristics illustrating the efficiency of the model are presented.
Building Mental Models by Dissecting Physical Models
Srivastava, Anveshna
2016-01-01
When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to…
From Product Models to Product State Models
Larsen, Michael Holm
1999-01-01
A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM and in particular how to model a PSM is the Research...
The IMACLIM model; Le modele IMACLIM
NONE
2003-07-01
This document provides annexes to the IMACLIM model, giving an updated description of IMACLIM, a model intended as a tool for evaluating greenhouse-gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are presented in detail. (A.L.B.)
Rask, Morten
insight from the literature about business models, international product policy, international entry modes and globalization into a conceptual model of relevant design elements of global business models, enabling global business model innovation to deal with differences in a downstream perspective...
Forward model nonlinearity versus inverse model nonlinearity
Mehl, S.
2007-01-01
The issue of concern is the impact of forward model nonlinearity on the nonlinearity of the inverse model. The question posed is, "Does increased nonlinearity in the head solution (forward model) always result in increased nonlinearity in the inverse solution (estimation of hydraulic conductivity)?" It is shown that the two nonlinearities are separate, and it is not universally true that increased forward model nonlinearity increases inverse model nonlinearity. © 2007 National Ground Water Association.
Combemale, Benoit; Cheng, Betty H.C.; Moreira, Ana; Bruel, Jean-Michel; Gray, Jeff
2015-01-01
Various disciplines use models for different purposes. An engineering model, including a software engineering model, is often developed to guide the construction of a non-existent system. A scientific model is created to better understand a natural phenomenon (i.e., an already existing system). An engineering model may incorporate scientific models to build a system. Sustainability is an area that requires both types of models. Both engineering and scientific models have been used to support ...
A review is made of some properties of rotating Universe models. Gödel's model is identified as a generalized tilted model. Some properties of new solutions of Einstein's equations, which are rotating non-stationary Universe models, are presented and analyzed. These models contain Gödel's model as a particular case. Non-stationary cosmological models are found which generalize Gödel's metric in a way analogous to how Friedmann's model generalizes Einstein's model. (L.C.)
Concept Modeling vs. Data modeling in Practice
Madsen, Bodil Nistrup; Erdman Thomsen, Hanne
2015-01-01
This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models. We also show how to map from the various elements in the terminological ontology to elements in the data models, and explain the differences between the models. Finally the usefulness of terminological ontologies as a prerequisite for IT development and data modeling is illustrated with examples from...
Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik
1997-01-01
This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application of the l...
Optimal predictive model selection
Barbieri, Maria Maddalena; Berger, James O.
2004-01-01
Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper we show that, for selection among normal linear models, the optimal predictive model is often the median probability model, which is defined a...
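The abstract's point, that the highest-posterior-probability model need not be the best predictive choice, can be illustrated with a small sketch of the median probability model (variable names and posterior values below are hypothetical):

```python
def inclusion_probabilities(model_posteriors):
    """Posterior inclusion probability of each variable: the sum of the
    posterior probabilities of all models that contain it."""
    probs = {}
    for model, post in model_posteriors.items():
        for v in model:
            probs[v] = probs.get(v, 0.0) + post
    return probs

def median_probability_model(incl_probs, threshold=0.5):
    """The median probability model: all variables whose posterior
    inclusion probability is at least one half."""
    return [v for v, p in incl_probs.items() if p >= threshold]

# Hypothetical posterior over four candidate linear models:
posteriors = {
    frozenset({'x1'}): 0.35,          # highest-posterior model
    frozenset({'x1', 'x2'}): 0.25,
    frozenset({'x2'}): 0.25,
    frozenset(): 0.15,
}
```

Here the highest-posterior model is {x1}, but x2 has inclusion probability 0.5, so the median probability model is {x1, x2}, illustrating how the two selections can differ.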
Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher
2014-01-01
The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...
Wake modelling combining mesoscale and microscale models
Badger, Jake; Volker, Patrick; Prospathospoulos, J.; Sieros, G.; Ott, Søren; Réthoré, Pierre-Elouan; Hahmann, Andrea N.; Hasager, Charlotte Bay
2013-01-01
In this paper the basis for introducing thrust information from microscale wake models into mesoscale model wake parameterizations will be described. A classification system for the different types of mesoscale wake parameterizations is suggested and outlined. Four different mesoscale wake...
Model Manipulation for End-User Modelers
Acretoaie, Vlad
End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support... ... and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality. To simplify model editor...
Model Checking of Boolean Process Models
Schneider, Christoph; Wehler, Joachim
2011-01-01
In the field of Business Process Management, formal models for the control flow of business processes have been designed for more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as the reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low tokens to mode...
MODEL VALIDATION AND THE PHILIPPINE PROGRAMMING MODEL
Rodriguez, Gil R. Jr.; Kunkel, David E.
1980-01-01
This research demonstrates the need and the procedure for testing sector programming models. It compares the model estimates of endogenous variables to carefully selected base-period parameters. It uses an operational, static, deterministic, and highly aggregate programming model of Philippine agriculture as the framework. Alternative formulations of the Philippine model are also examined for possible errors in the consumption, production, and objective function data sets.
Molecular Models: Construction of Models with Magnets
Kalinovčić P.
2015-01-01
Molecular models are indispensable tools in teaching chemistry. Besides their high price, commercially available models are generally too small for classroom demonstration. This paper suggests how to make space-filling (calotte) models from Styrofoam, with magnetic balls as connectors and disc magnets for showing molecular polarity.
Tahir Abdullah
2012-02-01
Full Text Available Software architecture design and requirements engineering are core and independent areas of engineering. A lot of research, education and practice is devoted to requirements elicitation and refinement, but it remains a major engineering issue. There is a huge gap between the two areas of software architecture and requirements engineering, and the QSMSR model acts as a bridge between requirements and design. The QSMSR model divides into two sub-models, a qualitative model and a principal model; in this research we focus on the qualitative model, which further divides into two sub-models, the fabricated model and the classified model. The classified model forms sub-groups of the roles and matches them with components. The fabricated model links the QSMSR principal model to an architecture design. In the end it provides the QSMSR architecture model of the system as output.
Willden, Jeff
2001-01-01
"Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…
Könemann, Patrick
just contain a list of strings, one for each line, whereas the structure of models is defined by their meta models. There are tools available which are able to compute the diff between two models, e.g. RSA or EMF Compare. However, their diff is not model-independent, i.e. it refers to the models it was...
Automated data model evaluation
The modeling process is an essential phase of information systems development and implementation. This paper presents methods and techniques for the analysis and evaluation of data model correctness. Recent methodologies and development results regarding the automation of model correctness analysis, and their relation to ontology tools, are presented. Key words: Database modeling, Data model correctness, Evaluation
Modelling Foundations and Applications
selected from 81 submissions. Papers on all aspects of MDE were received, including topics such as architectural modelling and product lines, code generation, domain-specific modeling, metamodeling, model analysis and verification, model management, model transformation and simulation. The breadth of topics...
Environmental Satellite Models for a Macroeconomic Model
To support national environmental policy, it is desirable to forecast and analyse environmental indicators consistently with economic variables. However, environmental indicators are physical measures linked to physical activities that are not specified in economic models. One way to deal with this is to develop environmental satellite models linked to economic models. The system of models presented gives a frame of reference where emissions of greenhouse gases, acid gases, and leaching of nutrients to the aquatic environment are analysed in line with - and consistently with - macroeconomic variables. This paper gives an overview of the data and the satellite models. Finally, the results of applying the model system to calculate the impacts on emissions and the economy are reviewed in a few illustrative examples. The models have been developed for Denmark; however, most of the environmental data used are from the CORINAIR system implemented in numerous countries
Geologic Framework Model Analysis Model Report
R. Clayton
2000-12-19
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure l), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the
Collaborative networks: Reference modeling
L.M. Camarinha-Matos; H. Afsarmanesh
2008-01-01
Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of
Earth Data Analysis Center, University of New Mexico — The model combines three modeled fire behavior parameters (rate of spread, flame length, crown fire potential) and one modeled ecological health measure (fire...
LSTM based Conversation Models
Luan, Yi; Ji, Yangfeng; Ostendorf, Mari
2016-01-01
In this paper, we present a conversational model that incorporates both context and participant role for two-party conversations. Different architectures are explored for integrating participant role and context information into a Long Short-term Memory (LSTM) language model. The conversational model can function as a language model or a language generation model. Experiments on the Ubuntu Dialog Corpus show that our model can capture multiple turn interaction between participants. The propos...
Computational neurogenetic modeling
Benuskova, Lubica
2010-01-01
Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol
National Aeronautics and Space Administration — CLAIRE MONTELEONI*, GAVIN SCHMIDT, AND SHAILESH SAROHA* Climate models are complex mathematical models designed by meteorologists, geophysicists, and climate...
Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...
Combustion modeling in a model combustor
L.Y.Jiang; I.Campbell; K.Su
2007-01-01
The flow-field of a propane-air diffusion flame combustor with interior and exterior conjugate heat transfers was numerically studied. Results obtained from four combustion models, combined with the re-normalization group (RNG) k-ε turbulence model, discrete ordinates radiation model and enhanced wall treatment, are presented and discussed. The results are compared with a comprehensive database obtained from a series of experimental measurements. The flow patterns and the recirculation zone length in the combustion chamber are accurately predicted, and the mean axial velocities are in fairly good agreement with the experimental data, particularly at downstream sections for all four combustion models. The mean temperature profiles are captured fairly well by the eddy dissipation (EDS), probability density function (PDF), and laminar flamelet combustion models. However, the EDS-finite-rate combustion model fails to provide an acceptable temperature field. In general, the flamelet model illustrates little superiority over the PDF model, and to some extent the PDF model shows better performance than the EDS model.
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
Clinton Lum
2002-02-04
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997, WA-0358), ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1, (CRWMS M&O 1999c). The Interim Change Notice (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3
Business value modeling based on BPMN models
Masoumigoudarzi, Farahnaz
2014-01-01
In this study we will try to clarify the explanation of modeling and measuring 'Business Values', as it is defined in business context, in the business processes of a company and introduce different methods and select the one which is best for modeling the company's business values. These methods have been used by researchers in business analytics and senior managers of many companies. The focus in this project is business value detection and modeling. The basis of this research is on BPM...
A future of the model organism model
Rine, Jasper
2014-01-01
Changes in technology are fundamentally reframing our concept of what constitutes a model organism. Nevertheless, research advances in the more traditional model organisms have enabled fresh and exciting opportunities for young scientists to establish new careers and offer the hope of comprehensive understanding of fundamental processes in life. New advances in translational research can be expected to heighten the importance of basic research in model organisms and expand opportunities. Howe...
Failure prediction model: Model napovedovanja odpovedi:
Čelan, Štefan; Težak, Oto; Žižek, Adolf
2002-01-01
Preventive maintenance is vital for delicate technical products. Electronic components or the whole system must be replaced, and thus a good model is needed that indicates failure accurately. In this paper a stochastic stress-strength quantitative model is presented, following five original hypotheses. The proposed new model of failure prediction could be used in system maintenance, and failure risk could be instantaneously calculated. The given theory considers the influences of stress on the ...
Better models are more effectively connected models
Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John
2016-04-01
The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity
Rahmani, Fouad Lazhar
2010-11-01
The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by ISHAM [6].
MacKay, N. J.
2006-01-01
An overview of Lanchester combat models, emphasising their pedagogical possibilities. After a description of the aimed-fire model and comments on the literature, we introduce briefly a range of further topics: a discrete equivalent, the unaimed-fire model, mixed forces, the meaning of a 'unit', support troops, Bracken's generalization and an asymmetric model.
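The aimed-fire model mentioned in the overview reduces to a pair of coupled linear ODEs; a minimal Euler-integration sketch (our own illustrative code and parameter values, not the author's) shows the resulting square law:

```python
# Lanchester aimed-fire (square-law) model: dR/dt = -b*B, dB/dt = -a*R.
# Illustrative sketch; the coefficients are assumptions, not from the paper.
def lanchester_aimed(R0, B0, a, b, dt=0.001, t_max=200.0):
    """Euler-integrate mutual attrition until one side is annihilated."""
    R, B, t = float(R0), float(B0), 0.0
    while R > 0 and B > 0 and t < t_max:
        R, B = R - b * B * dt, B - a * R * dt  # old values used on the right
        t += dt
    return max(R, 0.0), max(B, 0.0)

R, B = lanchester_aimed(400, 300, a=0.01, b=0.01)
```

With equal effectiveness (a = b) the quantity a·R² − b·B² is conserved, so the larger force survives with strength close to sqrt(400² − 300²) ≈ 264.6, which is the "square law" the pedagogical literature emphasises.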
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
Croft, Barbara Y.
2002-01-01
Animal models can be used in the study of disease. This chapter discusses imaging animal models to elucidate the process of human disease. The mouse is used as the primary model. Though this choice simplifies many research choices, it necessitates compromises for in vivo imaging. In the future, we can expect improvements in both animal models and imaging techniques.
Andrist, Rafael B.; Haworth, Guy McCrossan
2005-01-01
A reference model of Fallible Endgame Play has been implemented and exercised with the chess-engine WILHELM. Past experiments have demonstrated the value of the model and the robustness of decisions based on it: experiments agree well with a Markov Model theory. Here, the reference model is exercised on the well-known endgame KBBKN.
Generative Models of Disfluency
Miller, Timothy A.
2010-01-01
This thesis describes a generative model for representing disfluent phenomena in human speech. This model makes use of observed syntactic structure present in disfluent speech, and uses a right-corner transform on syntax trees to model this structure in a very natural way. Specifically, the phenomenon of speech repair is modeled by explicitly…
Modelling Railway Interlocking Systems
Lindegaard, Morten Peter; Viuf, P.; Haxthausen, Anne Elisabeth
2000-01-01
In this report we present a model of interlocking systems, and describe how the model may be validated by simulation. Station topologies are modelled by graphs in which the nodes denote track segments, and the edges denote connectivity for train traffic. Points and signals are modelled by annotatio...
On Multiobjective Evolution Model
Ahmed, E; Elettreby, M. F.
2004-01-01
Self-Organized Criticality (SOC) phenomena could have a significant effect on the dynamics of ecosystems. The Bak-Sneppen (BS) model is a simple and robust model of biological evolution that exhibits punctuated equilibrium behavior. Here we introduce a random version of the BS model, and we also generalize the single-objective BS model to a multiobjective one.
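The extremal update rule of the BS model is simple enough to sketch in a few lines. This is an illustrative implementation of the standard single-objective model (the lattice size, step count and seed are our own choices, not the authors'):

```python
import random

# Minimal sketch of the Bak-Sneppen evolution model on a ring of n species.
def bak_sneppen(n=64, steps=5000, seed=1):
    """At each step, replace the least-fit site and its two neighbours
    (periodic boundary) with fresh uniform random fitnesses."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)   # weakest species mutates
        for j in (i - 1, i, (i + 1) % n):            # ...and drags its neighbours
            fitness[j] = rng.random()
    return fitness

f = bak_sneppen()
```

Tracking the running minimum over many steps exhibits the punctuated-equilibrium avalanches the abstract refers to; in the infinite-size limit the minimum fitness is pushed toward a critical threshold of roughly 2/3.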
2015-09-01
The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.
Masonry behavior and modelling
Angelillo, Maurizio; Lourenço, Paulo B.; Milani, G.
2014-01-01
In this Chapter we present the basic experimental facts on masonry materials and introduce simple and refined models for masonry. The simple models are essentially macroscopic and based on the assumption that the material is incapable of sustaining tensile loads (No-Tension assumption). The refined models account for the microscopic structure of masonry, modeling the interaction between the blocks and the interfaces.
Numerical Modelling of Streams
Vestergaard, Kristian
In recent years there has been a sharp increase in the use of numerical water quality models. Numerical water quality modelling can be divided into three steps: hydrodynamic modelling for the determination of stream flow and water levels; modelling of transport and dispersion of a conservative...
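The transport-and-dispersion step for a conservative tracer can be sketched as a 1-D explicit finite-difference scheme (the grid spacing, velocity and dispersion coefficient here are our own illustrative assumptions, not values from the report):

```python
def advect_disperse(c, u=0.5, D=0.01, dx=0.1, dt=0.01, steps=100):
    """1-D advection-dispersion of a conservative tracer:
    first-order upwind advection + central-difference dispersion.
    Stability requires u*dt/dx <= 1 and D*dt/dx**2 <= 0.5."""
    c = list(c)
    n = len(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            adv = -u * (c[i] - c[i - 1]) / dx
            disp = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + disp)
        c = new   # boundary cells held fixed (Dirichlet)
    return c

profile = [0.0] * 50
profile[10] = 1.0           # unit tracer pulse at cell 10
out = advect_disperse(profile)
# After t = 1.0 the pulse has advected u*t = 0.5 (five cells) and spread.
```

The pulse centre moves at the advection velocity while mass is conserved away from the boundaries, which is the behaviour one checks before coupling such a solver to the hydrodynamic step.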
Gernaey, Krist; Sin, Gürkan
2008-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...
Gernaey, Krist; Sin, Gürkan
2011-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...
The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid
Sen S; Moha N.; Baudry B.; Jezequel J.-M.
2009-01-01
Large and complex meta-models, such as those of Uml and its profiles, are growing due to the modelling and inter-operability needs of numerous stakeholders. The complexity of such meta-models has led to the coining of the term meta-muddle. Individual users often exercise only a small view of a meta-muddle for tasks ranging from model creation to the construction of model transformations. What is the effective meta-model that represents this view? We present a flexible meta-model p...
Conceptual Model for Communication
Fedaghi, Sabah Al; Fadel, Zahraa
2009-01-01
A variety of idealized models of communication systems exist, and all may have something in common. Starting with Shannon's communication model and ending with the OSI model, this paper presents progressively more advanced forms of modeling of communication systems by tying communication models together based on the notion of flow. The basic communication process is divided into different spheres (sources, channels, and destinations), each with its own five interior stages: receiving, processing, creating, releasing, and transferring of information. The flow of information is ontologically distinguished from the flow of physical signals; accordingly, Shannon's model, network-based OSI models, and TCP/IP are redesigned.
Widera, Paweł
2011-01-01
The process of comparison of computer generated protein structural models is an important element of protein structure prediction. It has many uses including model quality evaluation, selection of the final models from a large set of candidates or optimisation of parameters of energy functions used in template free modelling and refinement. Although many protein comparison methods are available online on numerous web servers, their ability to handle a large scale model comparison is often very limited. Most of the servers offer only a single pairwise structural comparison, and they usually do not provide a model-specific comparison with a fixed alignment between the models. To bridge the gap between the protein and model structure comparison we have developed the Protein Models Comparator (pm-cmp). To be able to deliver the scalability on demand and handle large comparison experiments the pm-cmp was implemented "in the cloud". Protein Models Comparator is a scalable web application for a fast distributed comp...
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model or, equivalently, the number of components to use in the model. Four of these...... the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples....
Visualizing Risk Prediction Models
Vanya Van Belle; Ben Van Calster
2015-01-01
Objective: Risk prediction models can assist clinicians in making decisions. To boost the uptake of these models in clinical practice, it is important that end-users understand how the model works and can efficiently communicate its results. We introduce novel methods for interpretable model visualization. Methods: The proposed visualization techniques are applied to two prediction models from the Framingham Heart Study for the prediction of intermittent claudication and stroke after atrial fib...
William Poole
2006-01-01
Most monetary economists today conduct their analysis within some version of a rational expectations model. A well-defined equilibrium in such a model requires that the private sector understand policy goals and the policymakers' model of the economy. An austere version of the model, with no information asymmetries, is valid only to a first approximation but nevertheless provides core insights to short- and long-run monetary policy. In this model, effective policy requires clarity of policy g...
Slavik Stefan; Bednar Richard
2014-01-01
The term business model has been used in practice for only a few years, but companies have created, defined and innovated their models, often subconsciously, since the start of business. Our paper aims to clarify the theory of the business model, i.e., its definition and all the components that form each business. In the second part, we create an analytical tool, analyze real business models in Slovakia and define the characteristics of each part of the business model, i.e., customers, distribution, value, resour...
Inference for Multiplicative Models
Wexler, Ydo; Meek, Christopher
2012-01-01
The paper introduces a generalization of known probabilistic models such as log-linear and graphical models, called here multiplicative models. These models, which express probabilities via products of parameters, are shown to capture multiple forms of contextual independence between variables, including decision graphs and noisy-OR functions. An inference algorithm for multiplicative models is provided and its correctness is proved. The complexity analysis of the inference algorithm uses a mor...
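One of the multiplicative forms the abstract mentions, the noisy-OR function, expresses P(y=1 | x) as one minus a product of per-parent failure probabilities. A sketch of the standard parameterisation (the variable names and the leak term are our conventions, not necessarily the paper's):

```python
# Standard noisy-OR conditional probability: each active parent i
# independently fails to cause y with probability (1 - lam[i]);
# an optional leak term covers unmodelled causes.
def noisy_or(x, lam, leak=0.0):
    """P(y=1 | binary parents x) under the noisy-OR model."""
    p_all_fail = 1.0 - leak
    for xi, li in zip(x, lam):
        if xi:
            p_all_fail *= (1.0 - li)
    return 1.0 - p_all_fail

p = noisy_or([1, 1, 0], lam=[0.8, 0.5, 0.9], leak=0.1)
# 1 - 0.9 * 0.2 * 0.5 = 0.91
```

Because the probability is a product of parameters, one per active parent, it is exactly the kind of factored form a multiplicative-model inference algorithm can exploit.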
Wortelboer FG
1994-01-01
This report contains descriptions of the models currently used within the National Institute of Public Health and Environmental Protection (RIVM). Each model description contains the following entries: Name of the model, Contact in RIVM, Purpose, Policy theme, Technical specifications, Status, Availability, Documentation. In addition, the report contains a list of the models grouped by laboratory, a list of the models grouped by theme, and an index. The purpose of this report is both to give ...
An enhanced communication model
Flensburg, Per
2010-01-01
The concept of information is often taken more or less for granted in research about information systems. This paper introduces a model that starts with the Shannon and Weaver data transmission model and ends with knowledge transfer between individual persons. The model is, in fact, an enhanced communication model providing a framework for discussing problems in the communication process. A specific feature of the model is its aim of providing design guidelines for designing the communication process. Th...
Model Driven Language Engineering
Patrascoiu, Octavian
2005-01-01
Modeling is a most important exercise in software engineering and development, and one of the current practices is object-oriented (OO) modeling. The Object Management Group (OMG) has defined a standard object-oriented modeling language, the Unified Modeling Language (UML). The OMG is not only interested in modeling languages; its primary aim is to enable easy integration of software systems and components using vendor-neutral technologies. This thesis investigates the possibilities for designi...
Langseth, Helge; Nielsen, Thomas Dyhre
2005-01-01
parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the NB model with a mixture of factor analyzers, thereby relaxing the assumptions of...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....
Fundamentals of Friction Modeling
Al-Bender, Farid
2010-01-01
This communication presents an overview of friction model-building, which starts from the generic mechanisms behind friction to construct models that simulate observed macroscopic friction behavior. First, basic friction properties are presented. Then, the generic friction model is outlined. Thereafter, simple heuristic/empirical models, suitable for quick simulation and control purposes, are discussed. An example of these is the Generalized Maxwell-Slip model.
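A minimal sketch of a friction law of the heuristic/empirical kind this overview classifies (Coulomb plus viscous terms only; the function name and parameter values are illustrative assumptions, not taken from the paper):

```python
import math

def friction_force(velocity, f_c=1.0, sigma=0.5):
    """Minimal heuristic friction law: a Coulomb (dry) term plus a viscous term.
    f_c and sigma are illustrative parameters, not values from the paper."""
    if velocity == 0.0:
        return 0.0  # the stiction/pre-sliding regime is not modeled in this sketch
    return math.copysign(f_c, velocity) + sigma * velocity

print(friction_force(2.0))   # 2.0
print(friction_force(-2.0))  # -2.0
```

Richer models such as the Generalized Maxwell-Slip model add internal elasto-slip states on top of static laws like this one to capture pre-sliding and hysteresis behavior.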
Papamakarios, George
2015-01-01
Top-performing machine learning systems, such as deep neural networks, large ensembles and complex probabilistic graphical models, can be expensive to store, slow to evaluate and hard to integrate into larger systems. Ideally, we would like to replace such cumbersome models with simpler models that perform equally well. In this thesis, we study knowledge distillation, the idea of extracting the knowledge contained in a complex model and injecting it into a more convenient model. We present a ...
Tahir Abdullah; Shahbaz Nazeer
2012-01-01
Software architecture design and requirement engineering are core and independent areas of engineering. A lot of research, education and practice is devoted to requirement elicitation and its refinement, but it remains a major engineering issue. There is a huge gap between the two areas of software architecture and requirement engineering, and the QSMSR model acts as a bridge between requirements and design. The QSMSR model divides into two sub-models, a qualitative model and a principal model; in this resear...
Bubble models, data acquisition and model applicability
Jebavá, Marcela; Kloužek, Jaroslav; Němec, Lubomír
Vsetín: GLASS SERVICE, INC, 2005, s. 182-191. ISBN 80-239-4687-0. [International Seminar on Mathematical Modeling and Advanced Numerical Methods in Furnace Design and Operation /8./. Velké Karlovice (CZ), 19.05.2005-20.05.2005] Institutional research plan: CEZ:AV0Z40320502 Keywords: bubble models Subject RIV: CA - Inorganic Chemistry
Standard Model Masses and Models of Nuclei
Rivero, Alejandro
2003-01-01
We note an intriguing coincidence in nuclear levels, that the subshells responsible for doubly magic numbers happen to bracket nuclei at the energies of the Standard Model bosons. This could show that these bosons actually contribute to the effective mesons of nuclear models.
Geochemistry Model Validation Report: External Accumulation Model
The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached WP. Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation
Pavement Aging Model by Response Surface Modeling
Manzano-Ramírez A.
2011-10-01
In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL) and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were only almost adequate, with an error of 20% that was associated with the other environmental factors not considered at the beginning of the research.
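RSM of the kind used in the study above fits a low-order polynomial response surface to designed-experiment data and checks its residuals for adequacy; a one-factor sketch with synthetic numbers (the data are hypothetical, not the paper's measurements):

```python
import numpy as np

# Synthetic aging response vs. oven temperature (illustrative data only)
temps = np.array([100.0, 120.0, 140.0, 160.0, 180.0])
response = np.array([2.1, 3.0, 4.4, 6.1, 8.3])

# Second-order (quadratic) response surface in one factor
coeffs = np.polyfit(temps, response, deg=2)
surface = np.poly1d(coeffs)

# Adequacy check: residuals at the design points
max_residual = float(np.max(np.abs(response - surface(temps))))
print(f"max residual: {max_residual:.3f}")
```

In a full RSM study the factors (time and temperature here) would come from a designed experiment, and adequacy would be judged against independently aged specimens, as the authors do with field-aged cores.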
Model Validation Status Review
E.L. Hardin
2001-11-28
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and
Cameron, Ian T.; Gani, Rafiqul
This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models. These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice, followed by a series of case studies drawn from a variety of ... to biotechnology applications, food, polymer and human health application areas. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners.
Modeling extragalactic bowshocks. I. The model.
Ferruit, P.; Binette, L.; Sutherland, R. S.; Pecontal, E.
1997-06-01
To probe the effects of the nuclear activity on the host galaxy, it is essential to disentangle the relative contribution of shock excitation from that of photoionization. One milestone towards this goal is the ability to model the bowshock structures created by the interaction of radio ejecta with their surrounding medium. We have built a bowshock model based on that of TDA (Taylor, Dyson & Axon, 1992MNRAS.255..351T), which was itself derived from an earlier work on Herbig-Haro objects. Since TDA's original model supplied astronomers with only [OIII]λ5007 fluxes and profiles for various models of bowshocks, we undertook to include magnetic fields and to incorporate all of the atomic data tables of the code Mappings Ic for the computation of ionization states, cooling rates and line emissivities of the gas. This new model allows us to map line ratios and profiles of extragalactic bowshocks for all major lines of astrophysical interest. In this first paper, we present our model, analyse the gas behavior along the bowshock and give some examples of model results.
蒋娜; 谢有琪
2012-01-01
With the development of human society, the social hub has enlarged beyond the single community, to the extent that the world as a whole is deemed one community. Communication therefore plays an increasingly important role in our daily life. As a consequence, a communication model, or the definition of one, is not so much a definition as a guide to communication. However, some existing communication models are not as practical as they once were. This paper makes an overall contrast among three communication models, the Coded Model, the Gable Communication Model and the Ostensive-Inferential Model, to see how they assist people in comprehending verbal and non-verbal communication.
Towards Approximate Model Transformations
Troya, Javier; Wimmer, Manuel; Vallecillo, Antonio; Burgueño, Loli
2014-01-01
As the size and complexity of models grow, there is a need to count on novel mechanisms and tools for transforming them. This is required, e.g., when model transformations need to provide target models without having access to the complete source models or in really short time—as it happens, e.g., with streaming models—or with very large models for which the transformation algorithms become too slow to be of practical use if the complete population of a model is investigated. In this pa...
Luiz Carlos Bresser-Pereira
2012-03-01
Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish the classification that has a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.
Microsoft tabular modeling cookbook
Braak, Paul te
2013-01-01
This book follows a cookbook style, with recipes explaining the steps for developing analytic data using Business Intelligence Semantic Models. This book is designed for developers who wish to develop powerful and dynamic models for users as well as those who are responsible for the administration of models in corporate environments. It is also targeted at analysts and users of Excel who wish to advance their knowledge of Excel through the development of tabular models or who wish to analyze data through tabular modeling techniques. We assume no prior knowledge of tabular modeling.
LI Zhi-jia; YAO Cheng; KONG Xiang-guang
2005-01-01
To improve the Xinanjiang model, runoff generation from infiltration excess is added to the model, and another 6 parameters are added to the Xinanjiang model. In principle, the improved Xinanjiang model can be used to simulate runoff in humid, semi-humid and also semi-arid regions. The application to the Yi River shows that the improved Xinanjiang model can forecast discharge with higher accuracy and satisfy practical requirements. It also shows that the improved model is reasonable.
Hansen, Mads Fogtmann; Fagertun, Jens; Larsen, Rasmus
2011-01-01
This paper presents a fusion of the active appearance model (AAM) and the Riemannian elasticity framework which yields a non-linear shape model and a linear texture model – the active elastic appearance model (EAM). The non-linear elasticity shape model is more flexible than the usual linear subspace model, and it is therefore able to capture more complex shape variations. Local rotation and translation invariance are the primary explanation for the additional flexibility. In addition, we introduce global scale invariance into the Riemannian elasticity framework which together with the local...
Flexible survival regression modelling
Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben
2009-01-01
Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varying effects. The introduced models are all applied to data on breast cancer from the Norwegian cancer registry, and these analyses clearly reveal the shortcomings of Cox's regression model and the need for other supplementary analyses with models such as those we present here.
Reiter, E.R.
1980-01-01
A highly sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, and specifically weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
Modelling farmers' labour supply in CGE models
Gaasland, Ivar
2008-01-01
In most CGE models with a special focus on farm policy, the on-farm wage either follows the ordinary wage in the economy or varies according to an assumption of sector-specific farm labour. This paper demonstrates a practical and empirically consistent way to model farm-household allocation of labour in CGE models, assuming that farm labour is partially sector specific. In this setup, preferences for farming and the relative wage between on-farm and off-farm work determine the allocation...
Modeling Guru: Knowledge Base for NASA Modelers
Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.
2009-05-01
Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" comprises documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the
Empirical Model Building Data, Models, and Reality
Thompson, James R
2011-01-01
Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m
Major Differences between the Jerome Model and the Horace Model
朱艳
2014-01-01
There are three famous translation models in the field of translation: the Jerome model, the Horace model and the Schleiermacher model. The production and development of the three models have had a significant influence on translation. To find the major differences between the two western classical theoretical translation models, we discuss the Jerome model and the Horace model in depth in this paper.
D.W. Wu; A.J. Smith
2004-11-08
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
Dissertation: Precompound Emission of Energetic Light Fragments in Spallation Reactions
Kerby, Leslie Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-04
Emission of light fragments (LF) from nuclear reactions is an open question. Different reaction mechanisms contribute to their production; the relative roles of each, and how they change with incident energy, mass number of the target, and the type and emission energy of the fragments is not completely understood. None of the available models are able to accurately predict emission of LF from arbitrary reactions. However, the ability to describe production of LF (especially at energies ≳ 30 MeV) from many reactions is important for different applications, such as cosmic-ray-induced Single Event Upsets (SEUs), radiation protection, and cancer therapy with proton and heavy-ion beams, to name just a few. The Cascade-Exciton Model (CEM) version 03.03 and the Los Alamos version of the Quark-Gluon String Model (LAQGSM) version 03.03 event generators in Monte Carlo N-Particle Transport Code version 6 (MCNP6) describe quite well the spectra of fragments with sizes up to ⁴He across a broad range of target masses and incident energies (up to ~ 5 GeV for CEM and up to ~ 1 TeV/A for LAQGSM). However, they do not predict the high energy tails of LF spectra heavier than ⁴He well. Most LF with energies above several tens of MeV are emitted during the precompound stage of a reaction. The current versions of the CEM and LAQGSM event generators do not account for precompound emission of LF larger than ⁴He. The aim of our work is to extend the precompound model in them to include such processes, leading to an increase of predictive power of LF-production in MCNP6. This entails upgrading the Modified Exciton Model currently used at the preequilibrium stage in CEM and LAQGSM. It also includes expansion and examination of the coalescence and Fermi break-up models used in the precompound stages of spallation reactions within CEM and LAQGSM. Extending our models to include emission of fragments heavier than ⁴He at the precompound stage has indeed provided results that have much
Nonlinear Modeling by Assembling Piecewise Linear Models
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
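The assembly of piecewise linear (first-order Taylor) local models with radial-basis-function weights described in this abstract can be sketched in one dimension; the target function, sampling states, and RBF width below are arbitrary illustrative choices, not the authors' setup:

```python
import numpy as np

def f(x):
    return np.sin(x)   # stand-in for the full nonlinear system

def df(x):
    return np.cos(x)   # its derivative, supplying the Taylor terms

centers = np.linspace(0.0, 2.0 * np.pi, 7)   # sampling states
width = 0.6                                  # Gaussian RBF width (tuning choice)

def blended_model(x):
    # Local linear model about each sampling state: f(c) + f'(c) * (x - c)
    local_preds = f(centers) + df(centers) * (x - centers)
    # Normalized Gaussian RBF weights select the nearby local models
    w = np.exp(-((x - centers) / width) ** 2)
    w /= w.sum()
    return float(np.dot(w, local_preds))

x_test = np.linspace(0.0, 2.0 * np.pi, 50)
max_err = max(abs(blended_model(x) - f(x)) for x in x_test)
print(f"max abs error: {max_err:.3f}")
```

Away from the sampling states the blend interpolates between neighboring Taylor expansions, which is the mechanism by which the assembled model preserves the nonlinearity of the underlying system.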
Modeling agriculture in the Community Land Model
B. Drewniak; Song, J.(Pusan National University, Pusan, South Korea); Prell, J.; Kotamarthi, V. R.; Jacob, R.
2012-01-01
The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types – maize, soybean, and spring wheat – into the coupled carbon-nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the...
Modeling agriculture in the Community Land Model
B. Drewniak; Song, J.(Pusan National University, Pusan, South Korea); Prell, J.; Kotamarthi, V. R.; Jacob, R.
2013-01-01
The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types – maize, soybean, and spring wheat – into the coupled carbon–nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the ne...
OPEC model : adjustment or new model
Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, privatization. This article provides answers to two major questions: what are the reasons for these changes? Do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? This would imply a profound change of the political and institutional systems of oil producing countries. (Author)
Solid Waste Projection Model: Model user's guide
The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address solid waste management issues at the Hanford Central Waste Complex (HCWC). This document, one of six documents supporting the SWPM system, contains a description of the system and instructions for preparing to use SWPM and operating Version 1 of the model. 4 figs., 1 tab
A dispersion model to be used off coastal waters has been developed. The model has been applied to describe the migration of radionuclides in the Baltic Sea. A summary of the results is presented here. (K.A.E)
Modeling Infectious Diseases Fact Sheet
Using computers to prepare ... Predicting the potential spread of an infectious disease requires much more than simply connecting cities on ...
National Aeronautics and Space Administration — The Galactic model is a spatial and spectral template. The model for the Galactic diffuse emission was developed using spectral line surveys of HI and CO (as a...
Modeling EERE deployment programs
Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2007-11-01
The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.
Laboratory of Biological Modeling
Federal Laboratory Consortium — The Laboratory of Biological Modeling is defined by both its methodologies and its areas of application. We use mathematical modeling in many forms and apply it to...
Højsgaard, Søren; Edwards, David; Lauritzen, Steffen
Graphical models in their modern form have been around since the late 1970s and appear today in many areas of the sciences. Along with the ongoing developments of graphical models, a number of different graphical modeling software programs have been written over the years. In recent years many of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In addition, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...
Modeling in Chemical Engineering
Jaap van Brakel
2000-10-01
Models underlying the use of similarity considerations, dimensionless numbers, and dimensional analysis in chemical engineering are discussed. Special attention is given to the many levels at which models and ceteris paribus conditions play a role and to the modeling of initial and boundary conditions. It is shown that both the laws or dimensionless number correlations and the systems to which they apply are models. More generally, no matter which model or description one picks out, what is being modeled is itself a model of something else. Instead of saying that the artifact S models the given B, it is therefore better to say that S and B jointly make up B and S.
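The dimensionless-number idea from this abstract can be illustrated with the Reynolds number, a standard dimensionless group: Re = ρvL/μ collapses four dimensional quantities into one number that characterizes the flow regime. The numerical values below are an illustrative assumption (water in a 5 cm pipe), not from the paper.

```python
def reynolds_number(rho, v, length, mu):
    """Reynolds number Re = rho * v * L / mu.

    rho:    density [kg/m^3]
    v:      velocity [m/s]
    length: characteristic length [m]
    mu:     dynamic viscosity [Pa*s]
    The units cancel, leaving a dimensionless number.
    """
    return rho * v * length / mu

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) at 1 m/s in a 5 cm pipe:
re = reynolds_number(rho=1000.0, v=1.0, length=0.05, mu=1.0e-3)
print(re)  # 50000.0, well above ~4000, i.e. turbulent flow
```

Any system of units gives the same Re, which is exactly why dimensionless correlations transfer between model and full-scale systems.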
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will only be accessible at particular computing sites with potentially large latency. In contrast, the Analysis...
National Oceanic and Atmospheric Administration, Department of Commerce — The World Magnetic Model is the standard model used by the U.S. Department of Defense, the U.K. Ministry of Defence, the North Atlantic Treaty Organization (NATO)...
Chip Multithreaded Consistency Model
Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang
2008-01-01
Multithreading is the developing trend in high-performance processors. The memory consistency model is essential to the correctness, performance, and complexity of a multithreaded processor. This paper proposes a chip multithreaded consistency model adapted to multithreaded processors. The restrictions imposed on memory event ordering by chip multithreaded consistency are presented and formalized. Using the critical-cycle idea of Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the correctness criterion of the sequential consistency model. The chip multithreaded consistency model provides a way of achieving higher performance than sequential consistency while ensuring software compatibility: the execution result on a multithreaded processor is the same as on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed. The Godson-2 SMT processor correctly supports the chip multithreaded consistency model through an exception scheme based on the sequential memory access queue of each thread.
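The core requirement of sequential consistency that this abstract builds on can be sketched directly: every global order of memory events must preserve each thread's program order. The sketch below (an illustration, not the Godson-2 implementation) enumerates the interleavings of two two-event threads and keeps only those that respect program order.

```python
# Sequential consistency sketch: a global order is valid only if it
# keeps every thread's events in that thread's program order.
from itertools import permutations

def respects_program_order(global_order, thread_programs):
    """True if global_order preserves each thread's program order."""
    order = list(global_order)
    for program in thread_programs:
        positions = [order.index(event) for event in program]
        if positions != sorted(positions):
            return False
    return True

# Two threads, two memory events each (labels are illustrative):
t0 = ["w0:x=1", "r0:y"]
t1 = ["w1:y=1", "r1:x"]

sc_orders = [p for p in permutations(t0 + t1)
             if respects_program_order(p, [t0, t1])]
print(len(sc_orders))  # C(4,2) = 6 valid interleavings out of 4! = 24
```

A relaxed model such as the chip multithreaded consistency model admits more hardware reorderings internally while still guaranteeing results reachable under this stricter criterion.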
Bennett, Joan
1998-01-01
Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so the consistency of these diagrams must be verified in order to identify errors in requirements at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Our method can therefore be used to check the practicability (feasibility) of software architecture models.
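A consistency rule of the kind this abstract describes can be sketched mechanically. The rule, the diagram encodings, and all names below are assumptions for illustration (the paper does not publish its rule set here): every flow in a workflow diagram must connect elements declared in the context diagram.

```python
# Hypothetical consistency rule between two diagrams, each reduced to
# plain Python data for illustration.

# Elements declared in the (hypothetical) Business Context Diagram:
context_elements = {"Customer", "OrderSystem", "Warehouse"}

# Flows appearing in a (hypothetical) workflow diagram:
workflow_flows = [
    ("Customer", "OrderSystem"),
    ("OrderSystem", "Warehouse"),
    ("OrderSystem", "Billing"),   # "Billing" is never declared above
]

def undeclared_endpoints(flows, declared):
    """Return flow endpoints that are missing from the context diagram."""
    used = {endpoint for flow in flows for endpoint in flow}
    return sorted(used - declared)

print(undeclared_endpoints(workflow_flows, context_elements))  # ['Billing']
```

Automating many such rules over the full series of diagrams is what allows inconsistencies to surface before code generation.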
Oleg Svatos
2013-01-01
In this paper we analyze the complexity of the time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfying results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, the PSD process modeling language is presented, which supports the defined time-limit lifecycles natively and therefore keeps the models simple and easy to understand.
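A time-limit lifecycle of the kind outlined in this abstract can be sketched as a small state machine. The states and transitions below are illustrative assumptions, not the PSD language's actual constructs.

```python
# Illustrative lifecycle of a time limit as an explicit state machine.
# (state, event) -> next state; anything absent is a forbidden transition.
TRANSITIONS = {
    ("scheduled", "start"):   "running",
    ("running",   "suspend"): "suspended",
    ("suspended", "resume"):  "running",
    ("running",   "expire"):  "expired",
    ("running",   "fulfil"):  "fulfilled",
}

def advance(state, event):
    """Apply one lifecycle event, rejecting transitions the model forbids."""
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return TRANSITIONS[key]

state = "scheduled"
for event in ["start", "suspend", "resume", "expire"]:
    state = advance(state, event)
print(state)  # expired
```

Making the lifecycle explicit like this is what lets a modeling language capture a time limit in one construct instead of scattering it across timers and gateways.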
Lichtenberg, Jakob; Hansen, Michael Reichhardt; Rischel, Hans
This paper presents a solution to the Invoicing case study using the Standard ML programming language for modelling.
Riis, Troels; Jørgensen, John Leif
1999-01-01
This document describes a test of the implementation of the ASC orbit model for the Champ satellite.