WorldWideScience

Sample records for models indrajit bhattacharya

  1. Nabarun Bhattacharya: An Introduction

    Directory of Open Access Journals (Sweden)

    Sourit Bhattacharya

    2015-08-01

    Full Text Available Nabarun Bhattacharya was born in Calcutta in 1948 to Bijan Bhattacharya and Mahasweta Devi, both noted literary personalities. Devi left the family for a career in activism and literary writing when Nabarun was still a child. He grew up under the guidance and immediate artistic inspiration of Bijan Bhattacharya, whose play Nabanna (New Harvest), based on the 1943 Bengal Famine, had already made a huge impact in the Indian theatre world, and of his uncle Ritwik Ghatak, the noted auteur, whose films captured the despairing social conditions of poverty, joblessness, and political bankruptcy in a recently liberated nation. Nabarun’s literary writings were highly alert to immediate social and political concerns and marked by an ideological faith in Marxism.

  2. Rabi N. Bhattacharya selected papers

    CERN Document Server

    Waymire, Edward

    2016-01-01

    This volume presents some of the most influential papers published by Rabi N. Bhattacharya, along with commentaries from international experts, demonstrating his knowledge, insight, and influence in the field of probability and its applications. For more than three decades, Bhattacharya has made significant contributions in areas ranging from theoretical statistics via analytical probability theory, Markov processes, and random dynamics to applied topics in statistics, economics, and geophysics. Selected reprints of Bhattacharya’s papers are divided into three sections: Modes of Approximation, Large Times for Markov Processes, and Stochastic Foundations in Applied Sciences. The accompanying articles by the contributing authors not only help to position his work in the context of other achievements, but also provide a unique assessment of the state of their individual fields, both historically and for the next generation of researchers. Rabi N. Bhattacharya: Selected Papers will be a valuable resource for yo...

  3. Bhattacharya, Dakshina Ranjan

    Indian Academy of Sciences (India)

    Home; Fellowship. Fellow Profile. Elected: 1940 Section: Animal Sciences. Bhattacharya, Dakshina Ranjan Ph.D., FNA, FNASc 1940-43. Date of birth: 18 January 1888. Date of death: 26 August 1956. Specialization: Cytology, Ichthyology. YouTube; Twitter; Facebook; Blog ...

  4. Bhattacharya, Prof. Alok

    Indian Academy of Sciences (India)

    Home; Fellowship. Fellow Profile. Elected: 2015 Section: General Biology. Bhattacharya, Prof. Alok Ph.D. (JNU), FNA. Date of birth: 2 February 1951. Specialization: Molecular Parasitology, Computational Genomics, Rare Genetic Disorders Address: School of Life Sciences, Jawaharlal Nehru University, New Delhi 110 067, ...

  5. Bhattacharya, Prof. Sudha

    Indian Academy of Sciences (India)

    Elected: 2001 Section: General Biology. Bhattacharya, Prof. Sudha Ph.D. (IARI, New Delhi), FNASc, FNA Council Service: 2016. Date of birth: 7 March 1952. Specialization: Molecular Biology, Molecular Parasitology and Genomics Address: Professor, School of Environmental Sciences, Jawaharlal Nehru University, New ...

  6. The Impossible Demands of Nabarun Bhattacharya

    Directory of Open Access Journals (Sweden)

    Adheesha Sarkar

    2015-08-01

    Full Text Available This article aims at understanding the character of the ‘Fyataru’ that the Bengali author Nabarun Bhattacharya created. This character returns again and again in many of Nabarun’s stories and novels. It is an important signifier of Nabarun’s literary vision, since it most prominently represents the politics of dissent that the author believed in. Through a dissection of the fictional Fyataru, this article aims to understand the politics that guided Nabarun’s writings. It also tries to trace the philosophy behind this fictional journey, the history of Nabarun’s thought, and its broader implications in contemporary reality. Reading Nabarun’s literature in the light of the theory of anarchism illuminates the purpose of the Fyataru, and shows how this fictional creature can find its own reflection in individuals of today’s society who have rebelled against institutional oppression with the weapon of anarchist practices. Julian Assange, Edward Snowden, and other underground activists may well be considered such anarchists. The article also attempts to understand why anarchy is an essential element in a society regulated by governments, capitalist institutions and corporate powers. It aims to establish that anarchy protects freedom of expression from being thwarted by populist hegemony, and therefore protects the right of the individual to free thought and dissent. The only instrument that can prevent dominant opinions from marginalizing and throttling the formation of free ideas is subversion, and the Fyatarus of Nabarun’s literature are the harbingers of such subversion. Keywords: Anarchism; Subversion; Journalism; Nabarun.

  7. Stability analysis of an implicitly defined labor market model

    Science.gov (United States)

    Mendes, Diana A.; Mendes, Vivaldo M.

    2008-06-01

    Until very recently, the pervasive existence of models exhibiting well-defined backward dynamics but ill-defined forward dynamics in economics and finance has apparently posed no serious obstacle to the analysis of their dynamics and stability, despite the erroneous theoretical conclusions and policy prescriptions that may follow from such models. A large number of papers have dealt with this problem by assuming symmetry between forward and backward dynamics, even when the map is not invertible either forwards or backwards. However, this procedure has been seriously questioned over the last few years in a series of papers dealing with implicit difference equations and inverse limit spaces. This paper explores the search and matching labor market model developed by Bhattacharya and Bunzel [J. Bhattacharya, H. Bunzel, Chaotic Planning Solution in the Textbook Model of Equilibrium Labor Market Search and Matching, Mimeo, Iowa State University, 2002; J. Bhattacharya, H. Bunzel, Economics Bulletin 5 (19) (2003) 1-10], with the following objectives in mind: (i) to show that chaotic dynamics may still be present in the model for acceptable parameter values; (ii) to clarify some open questions related to the admissible dynamics in the forward-looking setting, by providing a rigorous proof of the existence of cyclic and chaotic dynamics through the application of tools from symbolic dynamics and inverse limit theory.
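The core difficulty described above can be made concrete: if a model determines x_t = f(x_{t+1}) with a non-invertible backward map f, then iterating forward means computing preimages, of which there may be zero, one, or several. The sketch below is illustrative only, using the logistic map as a stand-in backward map (not Bhattacharya and Bunzel's actual planning equations); `forward_successors` is a name invented here.

```python
import numpy as np

def backward_map(x):
    """Backward dynamics x_t = f(x_{t+1}); logistic map as a stand-in."""
    return 4.0 * x * (1.0 - x)

def forward_successors(x_t, f=backward_map, lo=0.0, hi=1.0,
                       n_grid=10_000, tol=1e-10):
    """All x_{t+1} in [lo, hi] with f(x_{t+1}) == x_t.

    Forward dynamics are set-valued: a non-invertible backward map can
    give zero, one, or several successors for the same state.  Roots
    are located by scanning a grid for sign changes and bisecting.
    """
    grid = np.linspace(lo, hi, n_grid)
    g = f(grid) - x_t
    roots = []
    for i in range(n_grid - 1):
        a, b = grid[i], grid[i + 1]
        ga, gb = g[i], g[i + 1]
        if ga == 0.0:
            roots.append(a)
            continue
        if ga * gb < 0.0:            # sign change: bisect to the root
            while b - a > tol:
                m = 0.5 * (a + b)
                if (f(m) - x_t) * ga <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    # drop near-duplicates from adjacent cells
    out = []
    for r in roots:
        if not out or abs(r - out[-1]) > 10 * tol:
            out.append(r)
    return out
```

For x_t = 0.5 the stand-in map has exactly two forward successors, (1 ± √0.5)/2, which is the sense in which the forward dynamics are "ill-defined" as a single-valued map.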

  8. Intra-night Optical Variability Monitoring of Fermi Blazars: First Results from 1.3 m J. C. Bhattacharya Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Paliya, Vaidehi S.; Ajello, M.; Kaur, A. [Department of Physics and Astronomy, Clemson University, Kinard Lab of Physics, Clemson, SC 29634-0978 (United States); Stalin, C. S., E-mail: vpaliya@g.clemson.edu [Indian Institute of Astrophysics, Block II, Koramangala, Bangalore-560034 (India)

    2017-07-20

    We report the first results obtained from our campaign to characterize the intra-night optical variability (INOV) properties of Fermi-detected blazars, using observations from the recently commissioned 1.3 m J. C. Bhattacharya telescope (JCBT). During the first run, we observed 17 blazars in the Bessel R filter for ∼137 hr. Using C- and scaled F-statistics, we quantify the extent of INOV and derive the duty cycle (DC), the fraction of time during which a source exhibits substantial flux variability. We find a high DC of 40% for BL Lac objects, whereas flat spectrum radio quasars are relatively less variable (DC ∼ 15%). When estimated for blazar sub-classes, however, a high DC of ∼59% is found in low synchrotron peaked (LSP) blazars, whereas intermediate and high synchrotron peaked objects have low DCs of ∼11% and 13%, respectively. We find evidence for an association of high-amplitude INOV with the γ-ray flaring state. We also notice high polarization during the elevated INOV states (for the sources with polarimetric data available), supporting a jet-based origin of the observed variability. We plan to enlarge the sample and utilize the time available on small telescopes, such as the 1.3 m JCBT, to strengthen/verify the results obtained in this work and those existing in the literature.
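For context, the duty cycle quoted in INOV studies of this kind is conventionally computed by weighting each monitoring session inversely by its duration. The helper below is a sketch of that standard estimator; the function name and the example sessions are invented for illustration.

```python
def duty_cycle(sessions):
    """INOV duty cycle, in percent.

    sessions: list of (variable, delta_t) pairs, one per monitoring
    session; `variable` is True if the source was flagged variable
    (e.g. by the C or scaled-F test), `delta_t` is the session length
    in hours.  Each session is weighted by 1/delta_t, following the
    definition commonly used in the INOV literature.
    """
    num = sum(1.0 / dt for variable, dt in sessions if variable)
    den = sum(1.0 / dt for _, dt in sessions)
    return 100.0 * num / den

# Hypothetical example: 4 sessions, 2 of them flagged variable
sessions = [(True, 4.0), (False, 4.0), (True, 2.0), (False, 6.0)]
```

With equal-length sessions the estimator reduces to the plain fraction of sessions flagged variable; unequal lengths shift the weight toward shorter sessions.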

  9. Dipankar Bhattacharya

    Indian Academy of Sciences (India)

    Emerged from the following considerations:
    • Successful operation of the IXAE piggyback instrument (proportional counters); observed signature of a BH event horizon.
    • Spectacular success of the Rossi X-ray Timing Explorer: numerous new discoveries owing to high-precision fast timing.
    • Experience with hard X-ray observations.

  10. Bhattacharya, Prof. Samaresh

    Indian Academy of Sciences (India)

    Coordination Chemistry and Organometallic Chemistry Address: Professor, Inorganic Chemistry Section, Department of Chemistry, Jadavpur University, Kolkata 700 032, W.B. Contact: Office: (033) 2414 6223. Residence: (033) 2431 0998. Mobile: ...

  11. An energy-balance model with multiply-periodic and quasi-chaotic free oscillations. [for climate forecasting

    Science.gov (United States)

    Bhattacharya, K.; Ghil, M.

    1979-01-01

    A slightly modified version of the one-dimensional time-dependent energy-balance climate model of Ghil and Bhattacharya (1978) is presented. The albedo-temperature parameterization has been reformulated and the smoothing of the temperature distribution in the tropics has been eliminated. The model albedo depends on time-lagged temperature in order to account for finite growth and decay time of continental ice sheets. Two distinct regimes of oscillatory behavior which depend on the value of the albedo-temperature time lag are considered.
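A zero-dimensional caricature of such an energy-balance model with a lagged albedo can be sketched as follows. All parameter values are invented for illustration and are not those of the Ghil and Bhattacharya model; the lag stands in for the finite growth and decay time of ice sheets.

```python
import numpy as np

def albedo(T):
    """Ice-albedo ramp: high albedo when cold, low when warm (illustrative)."""
    return np.clip(0.6 - 0.03 * (T + 10.0), 0.3, 0.6)

def integrate(lag_years, T0=10.0, years=200.0, dt=0.01, C=1.0,
              Q=340.0, A=210.0, B=2.0):
    """Euler integration of C dT/dt = Q(1 - albedo(T(t - lag))) - (A + B*T).

    The albedo sees the temperature `lag_years` in the past; varying the
    lag lets one explore how delayed albedo response changes the
    transient behaviour relative to the instantaneous-albedo case.
    """
    n = int(years / dt)
    lag_steps = int(lag_years / dt)
    T = np.empty(n + 1)
    T[0] = T0
    for k in range(n):
        T_lagged = T[max(k - lag_steps, 0)]   # constant history before t=0
        T[k + 1] = T[k] + dt * (Q * (1.0 - albedo(T_lagged)) - (A + B * T[k])) / C
    return T
```

With these made-up numbers and no lag, the warm branch has a stable equilibrium at T = 14 (where Q(1 − 0.3) = A + B·T), which the integration approaches monotonically.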

  12. Journal of Earth System Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Earth System Science; Volume 114; Issue 3. Significance of transition between Talchir Formation and Karharbari Formation in Lower Gondwana basin evolution — A study in West Bokaro Coal basin, Jharkhand, India. H N Bhattacharya Abhijit Chakraborty Biplab Bhattacharya. Volume 114 Issue ...

  13. Bulletin of Materials Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Bulletin of Materials Science. T K Bhattacharya. Articles written in Bulletin of Materials Science. Volume 26 Issue 7 December 2003 pp 703-706 Cements. Solid state sintering of lime in presence of La2O3 and CeO2 · T K Bhattacharya A Ghosh H S Tripathi S K Das · More Details Abstract Fulltext PDF.

  14. Fulltext PDF

    Indian Academy of Sciences (India)

    Bhattacharji Somdev: see Chatterjee Nilanjan. 533. Two- and three-dimensional gravity modeling along western continental margin and intraplate Narmada-Tapti rifts: Its relevance to Deccan flood basalt volcanism. 771. Bhattacharya S. Alkaline intrusion in a granulite ensemble in the Eastern Ghats belt, India: Shear zone ...

  15. The Vagabond’s War Cry: the “Other” in Nabarun’s Narrative

    Directory of Open Access Journals (Sweden)

    Dibyakusum Ray

    2015-08-01

    Full Text Available The objective of this article is to discuss, in brief, how Nabarun Bhattacharya deals with the question of the “other.” Bhattacharya treats the ‘other’ not as a convenient literary device, but as an ever explosive, ever truant fantastic object that jumps at the face of the ‘I’ (the author and the reader) and says “NO”. Bhattacharya, over a career spanning roughly four decades, presents not only the human subaltern as a belligerent agent of chaos, but also depicts cats, dogs and lunatic footpath dwellers in a similar role. Paranormal humanoids with wings come out of his pages to shock and destroy the audience’s perception of what the life-world should be like. He speaks in a language that might forever remain beyond comprehension. This essay is on the culminating phase of Bhattacharya’s prose-writing career, from 1999 onwards to be precise, and I wish to explain how such a culmination becomes apparent in his handling of theme, style and ideology. The author presents the “other” as the ultimate voice of dissent, and a thorough reading of his oeuvre makes clear that he slowly removed himself from the linear assumption that once the marginalized assume power, all shall be equal forever. In this article, I have attempted to demonstrate how Bhattacharya does not thrust any emancipatory role on the “other” but lets them speak freely from the margin, posing perhaps a greater challenge and a more intense threat to normative powers, thematically and linguistically. Keywords: Nabarun Bhattacharya; Other; Subversion; Resistance.

  16. Matter-Wave Optics of Diatomic Molecules

    Science.gov (United States)

    2012-10-23

    Swati Singh, Pierre Meystre. Atomic probe Wigner tomography of a nanomechanical system, Physical Review A (04 2010): 41804 ... PhysRevA.78.041801 ... S. Singh, M. Bhattacharya, O. Dutta, P. Meystre. Coupling Nanomechanical Cantilevers to Dipolar Molecules ... degenerate matter waves, Physical Review A (02 2009). doi: 10.1103/PhysRevA.79.023622 ... M. Bhattacharya, S. Singh, P.-L. Giscard

  17. Sudha Bhattacharya Jawaharlal Nehru University New Delhi

    Indian Academy of Sciences (India)

    TSC

    [Figure residue: nicking profile of EN on the bottom-strand-labelled 174 bp substrate, showing the transposon insertion point; the endonuclease nicks at three hot spots and the transposon inserts at hot spot #3.]

  18. Plate tectonic reconstruction of India and Madagascar closing through the Mascarene Basin

    Digital Repository Service at National Institute of Oceanography (India)

    Shuhail, M.

    anomaly information from the Mascarene Basin forced all those studies to follow the IND-ANT-AFR circuit to obtain India-Madagascar reconstruction models in their pre-drift scenario (e.g. Fig. 2a). Since those models did not account for the deformation ....1029/2007GC001743 3. Royer et al., 2002, Geological Society of London Special Publication 195, 7-23 4. Yatheesh et al., 2006, Gondwana Research 10, 179-185. PhD under the guidance of V. Yatheesh and G.C. Bhattacharya. CSIR-NIO/UGC are acknowledged...

  19. Plate-tectonic evolution of the deep ocean basins adjoining the western continental margin of India - A proposed model for the early opening scenario

    Digital Repository Service at National Institute of Oceanography (India)

    Bhattacharya, G.C.; Yatheesh, V.

    strength with an apparent breakthrough a few years later, when Bhattacharya et al. (1994a) reported the presence of a short sequence of two-limbed seafloor spreading type magnetic anomalies in the Laxmi Basin sector of the WCMI-ADOB. Subsequently, Malod et al. (1997) reported the presence of two-limbed seafloor spreading type magnetic anomalies in the Gop Basin sector of the WCMI-ADOB. In subsequent years several publications (Reeves and Leven 2001; Chatterjee et al. 2006, 2013; Bastia et al. 2010...

  20. Consumer-company Identification: Development and Validation of a Scale

    Directory of Open Access Journals (Sweden)

    Diogo Fajardo Nunes Hildebrand

    2010-07-01

    Full Text Available Consumer-Company Identification is a relatively new issue in marketing academia. Bhattacharya and Sen (2003) explored Social Identity theory and established Consumer-Company Identification as the primary psychological substrate for deep relationships between the organization and its customers. In the present study a new instrument was constructed and validated that permits empirical verification of the phenomenon described by Bhattacharya and Sen (2003). The scale validated in the present study is the first to embrace the idiosyncrasies of the identification between consumers and organizations. The process was conducted through 3 independent data collections. The first was gathered through a literature search and in-depth interviews with 12 undergraduate students and graduates from different professional fields. The second database was obtained from a survey of 226 undergraduate students from 3 universities in 2 big Brazilian cities; this database was used for scale purification via Exploratory Factor Analysis. Finally, the Structural Equation Modeling technique was applied to a third database composed of 387 observations collected from the same 3 universities as the second study. The results confirm the content, convergent and discriminant validity of the new scale.

  1. Numerical solution of the one-dimensional Burgers' equation ...

    Indian Academy of Sciences (India)

    Burgers' equation; exponential finite difference method; implicit exponential finite difference method ... prescribed functions of the variables. Pramana – J. ... explicit exponential finite difference method was originally developed by Bhattacharya.

  2. Journal of Chemical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Author Affiliations. Santanu Bhattacharya1 Raghavan Varadarajan2. Department of Organic Chemistry, Indian Institute of Science, Bangalore 560 012; Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560 012 ...

  3. Bulletin of Materials Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Author Affiliations. Divya Singh1 Pramod K Singh1 Nitin A Jadhav1 Bhaskar Bhattacharya1. Material Research Laboratory, School of Engineering and Technology, Sharda University, Greater Noida 201 306, India ...

  4. Resonance – Journal of Science Education | Indian Academy of ...

    Indian Academy of Sciences (India)

    Author Affiliations. Sreebrata Goswami1 Samaresh Bhattacharya2. Department of Inorganic Chemistry Indian Association for the Cultivation of Science Calcutta 700032, India. Department of Chemistry Inorganic Chemistry Section Jadavpur University Calcutta 700032, India.

  5. Syndrome-wise diagnosis status of sexually transmitted infection ...

    African Journals Online (AJOL)

    a dramatic transformation[1] medical attention is ... recognition of STDs as a major public health problem, stigma ..... We would like to thank Ms. Mohua Bhattacharya, technical staff, for ... Department of Biochemistry, Medical College, College.

  6. and Cadmium Zinc Telluride

    African Journals Online (AJOL)

    Bheema

    INTRODUCTION. Semiconductor nanoparticles or Quantum Dots (QDs), in particular II-VI materials, have ... the study of structural, electronic transport and optical properties of Zn doped CdTe thin films, ...... Bhattacharya, S.K & Anjali, K. 2007.

  7. Journal of Earth System Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    sediments of Andaman-Java subduction complex · Tapan Pal Biswajit Ghosh Anindya Bhattacharya S K Bhaduri · More Details Abstract Fulltext PDF. The bedded felsic tuff exposed in Rutland Island, Andaman, consists of two facies:.

  8. J. B. S. Haldane's passage to India: reconfiguring science

    Indian Academy of Sciences (India)

    Gordon Mcouat

    2017-11-25

    Nov 25, 2017 ... the efforts to build a 'modern', democratic India emerging out of the ashes ... a postcolonial respect for traditional 'non-Western' values. Although his ..... Kolkata. Bhattacharya T. 2007 The sentinels of culture: class, education.

  9. Geophysical characteristics of the Ninetyeast Ridge–Andaman island arc/trench convergent zone

    Digital Repository Service at National Institute of Oceanography (India)

    Subrahmanyam, C.; Gireesh, R.; Chand, S.; KameshRaju, K.A.; Rao, D.G.

    ] N. Furus, Y. Kono, Slab residual gravity anomaly: gravity reduction due to subducting plates beneath the Japanese Islands, J. Geodyn. 36 (2003) 497-514. 19 [15] S Dasgupta, M. Mukhopadhyay, A. Bhattacharya, T. K. Jana, The geometry...

  10. B-Parameters of 4-Fermion Operators from Lattice QCD

    International Nuclear Information System (INIS)

    Gupta, Rajan

    1997-07-01

    This talk summarizes the status of the calculations of B_K, B_7, B_8, and B_S, done in collaboration with T. Bhattacharya, C. Kilcup, and S. Sharpe. Results for staggered, Wilson, and Clover fermions are presented.

  11. Decay of Hoyle state

    Indian Academy of Sciences (India)

    2014-11-02

    Nov 2, 2014 ... T K RANA, C BHATTACHARYA, S KUNDU, ... of various direct 3α decay mechanisms of the Hoyle state. ... Pramana – J. Phys., Vol. ... FMD predicts a compact triangle shape and LEFT predicts a bent arm chain structure,.

  12. 3d Transition metal decorated B–C–N composite nanostructures for ...

    Indian Academy of Sciences (India)

    Administrator

    efficient hydrogen storage: A first-principles study. S BHATTACHARYA, C ... and renewable fuels (Schlappbach and Züttel 2001; Züttel. 2003, 2004) because of its ... kinetics as well as low binding energy that could lead to the possibility of ...

  13. hepatitis b and hiv co-infection in south africa: just treat it! clinical

    African Journals Online (AJOL)

    2009-03-16

    Mar 16, 2009 ... There are an estimated 350 million hepatitis B carriers worldwide. The prevalence of ... detection of two regions of the hepatitis virus DNA via. PCR is required for an ..... Bhattacharya D, Katzenstein D, Wong A, et al. Alanine ...

  14. Growth studies on Norway lobster, Nephrops norvegicus (L.), in different areas of the Mediterranean Sea and the adjacent Atlantic

    Directory of Open Access Journals (Sweden)

    Chryssi Mytilineou

    1998-12-01

    Full Text Available A comparative study of the growth of Nephrops norvegicus among different areas in the Mediterranean Sea and the adjacent Atlantic was conducted. The MIX and Bhattacharya's length-based methods were used for age determination in all the studied areas. For the estimation of the growth parameters two non-linear methods, based on the results of the length-frequency analysis, were used: the Gauss-Newton method, implemented in the SAS program, was applied to the results of MIX, and the FISHPARM program to the results of Bhattacharya's method. The identification of the age groups and their mean lengths-at-age, as well as the estimation of the growth parameters, proved to be difficult. A question regarding the adequacy of the von Bertalanffy model was also posed. Remarkable differences were evident between sexes in the number of identified age groups, their mean lengths-at-age, and their growth parameters in all areas. The comparison of the results for the studied areas showed differences which could not be considered very important, except in the case of the Nephrops population of the Alboran Sea, which was characterised by a high growth rate. All other areas seemed to be close, with the populations from the Euboikos Gulf and the Catalan Sea being the most different.
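Bhattacharya's length-based method exploits the fact that, for a normal component with mean μ and standard deviation σ binned into classes of width h, the successive log-ratio ln N(L+h) − ln N(L) is linear in the class midpoint L, with slope −h/σ². A straight-line fit over one mode therefore recovers μ and σ. The sketch below handles a single well-separated mode and is illustrative only, not the MIX or FISHPARM implementations.

```python
import numpy as np

def bhattacharya_single_mode(midpoints, counts):
    """Recover (mu, sigma) of one Gaussian mode from binned counts.

    Fits ln N(L+h) - ln N(L) against the midpoint L.  For a normal
    component the relation is linear with slope b = -h/sigma**2 and
    intercept a = -b*(mu - h/2), so sigma = sqrt(-h/b) and
    mu = -a/b + h/2.  Assumes equally spaced classes and a single
    dominant mode.
    """
    h = midpoints[1] - midpoints[0]
    L, dlog = [], []
    for i in range(len(counts) - 1):
        if counts[i] > 0 and counts[i + 1] > 0:   # need both classes non-empty
            L.append(midpoints[i])
            dlog.append(np.log(counts[i + 1]) - np.log(counts[i]))
    b, a = np.polyfit(L, dlog, 1)
    sigma = np.sqrt(-h / b)
    mu = -a / b + h / 2.0
    return mu, sigma
```

On noise-free counts drawn from a single Gaussian the fit is exact; with real length-frequency data one would fit each mode over its own run of classes and subtract fitted components successively, as the full method prescribes.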

  15. Journal of Chemical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Chemical Sciences; Volume 112; Issue 3. Steric control of the coordination mode of thiosemicarbazone ligands, synthesis, structure ... Author Affiliations. Falguni Basuli1 Samaresh Bhattacharya1. Department of Chemistry, Inorganic Chemistry Section, Jadavpur University, Calcutta 700 032, India ...

  16. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    ... of X-ray transients with Scanning Sky Monitor (SSM) onboard AstroSat. M. C. RAMADEVI B. T. RAVISHANKAR ABHILASH R. SARWADE S. VAISHALI NIRMAL KUMAR IYER ANUJ NANDI V. GIRISH VIVEK KUMAR AGARWAL BLESSY ELIZABETH BABY MOHAMMED HASAN S. SEETHA DIPANKAR BHATTACHARYA.

  17. Research Paper ISSN 0189-6016©2008

    African Journals Online (AJOL)

    AJTCAM

    inflammatory, antitumor and antioxidant properties (Sharada et al., 1996; Bhattacharya et al., ... Withaferin-A (WA) was extracted and isolated from commercially available Withania somnifera root ..... Chemomodulatory efficacy of Basil leaf (Ocimum ... In vitro human phase I metabolism of xenobiotics I: pesticides and related.

  18. Associateship | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Fellowship; Associateship. Associate Profile. Period: 2016–2019. Bhattacharya, Dr Atanu Ph.D. (Colorado State). Date of birth: 2 March 1983. Specialization: Ultrafast Science, Surface Science, Molecular Beam Experiments Address: IPC Department, Indian Institute of Science, Bengaluru 560 012, Karnataka Contact:

  19. ACKNOWLEDGEMENTS

    Indian Academy of Sciences (India)

    ... Raha, Soumen Basak , Amitava Majumder, Mousumi Basu, Smarajit Polley, Raja Bhattacharya, and all other lab members. OUR COLLABORATORS: Prof. S. Roy, Bose Institute, Kolkata. Prof.M.S.Shaila, IISC, Bangalore. GRANTING AGENCIES: Dept. of Science and Technology. All India Council for Technical Education.

  20. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Neutron Star Physics in the Square Kilometre Array Era: An Indian Perspective · Sushan Konar Manjari Bagchi Debades Bandyopadhyay Sarmistha Banik Dipankar Bhattacharya Sudip Bhattacharyya R. T. Gangadhara A. Gopakumar Yashwant Gupta B. C. Joshi Yogesh Maan Chandreyee Maitra Dipanjan Mukherjee ...

  1. JCSC_128_8_1175_1189_SI.docx

    Indian Academy of Sciences (India)

    IISC

    On the Attosecond charge migration in Cl···N, Cl···O, Br···N and Br···O Halogen-bonded clusters: Effect of donor, acceptor, vibration, rotation, and electron correlation. SANKHABRATA CHANDRA, MOHAMMED MUSTHAFA IQBAL and ATANU BHATTACHARYA*. Department of Inorganic and Physical Chemistry, Indian ...

  2. Journal of Biosciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Single-nucleotide variations associated with Mycobacterium tuberculosis KwaZulu-Natal strains · Sarbashis Das Ragothaman M Yennamalli Anchal Vishnoi Parul Gupta Alok Bhattacharya · More Details Abstract Fulltext PDF. The occurrence of drug resistance in Mycobacterium tuberculosis, the aetiological agent of ...

  3. A Robust Alternative to the Normal Distribution.

    Science.gov (United States)

    1982-07-07

    for any Purpose of the United States Government. DEPARTMENT OF STATISTICS, STANFORD UNIVERSITY, STANFORD, CALIFORNIA. A Robust Alternative to the ... Stanford University Technical Report No. 3. [5] Bhattacharya, S. K. (1966). A Modified Bessel Function Model in Life Testing. Metrika 10, 133-144.

  4. Brain Damage from Soman-Induced Seizures Is Greatly Exacerbated by Dimethyl sulfoxide (DMSO): Modest Neuroprotection by 2-Aminoethyl diphenylborinate (2- APB), a Transient Receptor Potential Channel Inhibitor and Inositol 1,4,5-triphosphate Receptor Antagonist

    Science.gov (United States)

    2008-03-04

    stereotypy, and wet-dog shakes. Overt motor convulsions were characterized by rhythmic clonic jerks of head and forepaws, rearing, salivation and Straub ... Dale LB, Bhattacharya M, Anborgh PH, Murdoch B, Bhatia M, Nakanishi S, Ferguson SS. G protein-coupled receptor kinase-mediated desensitization of

  5. N-S crustal shear system in the Bundelkhand massif: a unique ...

    Indian Academy of Sciences (India)

    56

    In the light of our detailed geological studies of the massif (Singh and Bhattacharya,. 2010 .... relations with, and displace, the earlier shear systems, i.e. BS1, BS2 and BS3 (Fig. 3D), (3) ..... and shear zone patterns: The South Indian case. Jour.

  6. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Sadhana. SUMANA GUPTA. Articles written in Sadhana. Volume 42 Issue 10 October 2017 pp 1685-1692. Selection of keyframes for video colourization using steerable filtering · SOMDYUTI PAUL SAUMIK BHATTACHARYA SUMANA GUPTA · More Details Abstract Fulltext PDF. The appropriate selection ...

  7. to view fulltext PDF

    Indian Academy of Sciences (India)

    From Euclid to Soccer ... A R Rao. Chemical Research of Sir Prafulla Chandra Ray. Sreebrata Goswami and Samaresh Bhattacharya. Acharya Prafulla Chandra at the College of Science. Gurunath Mukherjee. FEATURE ARTICLES. 54 Nature Watch. Enigmatic Bamboos. C K John and Rajani S Nadgauda.

  8. Journal of Chemical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Chemical Sciences; Volume 117; Issue 2. Rhodium and iridium complexes of N-(2'-hydroxyphenyl)pyrrole-2-aldimine: Synthesis, structure, and spectral and electrochemical properties. Semanti Basu Indrani Pal Ray J Butcher Georgina Rosair Samaresh Bhattacharya. Volume 117 Issue 2 March ...

  9. Hyper-parallel tempering Monte Carlo simulations of Ar adsorption in new models of microporous non-graphitizing activated carbon: effect of microporosity

    International Nuclear Information System (INIS)

    Terzyk, Artur P; Furmaniak, Sylwester; Gauden, Piotr A; Harris, Peter J F; Wloch, Jerzy; Kowalczyk, Piotr

    2007-01-01

    The adsorption of gases on microporous carbons is still poorly understood, partly because the structure of these carbons is not well known. Here, a model of microporous carbons based on fullerene-like fragments is used as the basis for a theoretical study of Ar adsorption on carbon. First, a simulation box was constructed, containing a plausible arrangement of carbon fragments. Next, using a new Monte Carlo simulation algorithm, two types of carbon fragments were gradually placed into the initial structure to increase its microporosity. Thirty-six different microporous carbon structures were generated in this way. Using the method proposed recently by Bhattacharya and Gubbins (BG), the micropore size distributions of the obtained carbon models and the average micropore diameters were calculated. For ten chosen structures, Ar adsorption isotherms (87 K) were simulated via the hyper-parallel tempering Monte Carlo simulation method. The isotherms obtained in this way were described by widely applied methods of microporous carbon characterisation, i.e. Nguyen and Do, Horvath-Kawazoe, high-resolution α_s plots, adsorption potential distributions and the Dubinin-Astakhov (DA) equation. From simulated isotherms described by the DA equation, the average micropore diameters were calculated using empirical relationships proposed by different authors and they were compared with those from the BG method
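As a point of reference for the simulation technique, the grand canonical Metropolis core on which tempering schemes such as hyper-parallel tempering build can be sketched with a 2D lattice gas. This is a deliberately minimal stand-in, not the authors' algorithm or their fullerene-like carbon model, and all parameter values are invented.

```python
import math
import random

def gcmc_lattice(mu, beta=1.0, eps=1.0, L=8, sweeps=2000, seed=0):
    """Grand canonical Metropolis MC for a 2D lattice gas.

    Sites are empty (0) or occupied (1); the grand potential energy is
    E = -eps * sum_nn n_i n_j - mu * N.  Each move toggles one site
    (insertion or removal) and is accepted with min(1, exp(-beta*dE)).
    Returns the mean coverage over the second half of the run.
    """
    rng = random.Random(seed)
    occ = [[0] * L for _ in range(L)]
    N = 0
    cov_acc = 0.0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (occ[(i + 1) % L][j] + occ[(i - 1) % L][j]
                  + occ[i][(j + 1) % L] + occ[i][(j - 1) % L])
            if occ[i][j]:                 # removal: dE = +eps*nn + mu
                dE = eps * nn + mu
            else:                         # insertion: dE = -eps*nn - mu
                dE = -eps * nn - mu
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                occ[i][j] ^= 1
                N += 1 if occ[i][j] else -1
        if sweep >= sweeps // 2:          # accumulate after burn-in
            cov_acc += N / (L * L)
    return cov_acc / (sweeps - sweeps // 2)
```

Scanning the chemical potential mu and recording the coverage traces out an adsorption isotherm for this toy system, the lattice analogue of the simulated Ar isotherms described above.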

  10. Journal of Genetics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Genetics. MD. SAIMUL ISLAM. Articles written in Journal of Genetics. Volume 95 Issue 3 September 2016 pp 551-563 RESEARCH ARTICLE. Frequent alterations of SLIT2–ROBO1–CDC42 signalling pathway in breast cancer: clinicopathological correlation · RITTWIKA BHATTACHARYA NUPUR ...

  11. Journal of Biosciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Sarbashis Das1 Ragothaman M Yennamalli1 Anchal Vishnoi1 Parul Gupta1 Alok Bhattacharya1 2. Center for Computational Biology and Bioinformatics, School of Information Technology, Jawaharlal Nehru University, New Delhi 110 067, India; School of Life Sciences, Jawaharlal Nehru University, New Delhi 110 067, ...

  12. Age and growth of dominant cichlids in Gbedikere Lake, Kogi State ...

    African Journals Online (AJOL)

    Sixty samples of the fish species, comprising thirty Tilapia zilli and thirty Oreochromis niloticus, were obtained from artisanal fishers at the common landing site along the lake. Age was determined using Bhattacharya's length-frequency method and, where applicable, fish scales and opercular bones.

  13. Sharath Ananthamurthy

    Indian Academy of Sciences (India)

    Volume 35 Issue 4 August 2012 pp 529-532. An optical tweezer-based study of antimicrobial activity of silver nanoparticles · Yogesha Sarbari Bhattacharya M K Rabinal Sharath Ananthamurthy · More Details Abstract Fulltext PDF. Understanding and characterizing microbial activity reduction in the presence of antimicrobial ...

  14. Fulltext PDF

    Indian Academy of Sciences (India)

    Arora B M, Tata Institute of Fundamental Research, Mumbai. Awana V P S, National Physical Laboratory, New Delhi. Banerjee S, Bhabha Atomic Research Centre, Mumbai. Bhattacharya S, Tata Institute of Fundamental Research, Mumbai. Chaddah P, UGC-DAE CSR, Indore. Chaplot S L, Bhabha Atomic Research Centre, ...

  15. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Journal of Astrophysics and Astronomy. 258 pages Volume 38 Issue 3 September 2017. Special issue on "Physics of Neutron Stars and Related Objects". Guest Editors: Dipankar Bhattacharya, K. S. Dwarakanath and Sushan Konar. 116 pages Volume 38 Issue 2 June 2017. Special Section on "AstroStat". Guest Editors: S ...

  16. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Astrophysics and Astronomy; Volume 38; Issue 3. Issue front cover thumbnail. Volume 38, Issue 3. September 2017. Special issue on "Physics of Neutron Stars and Related Objects". Article ID 36 Editorial. Editorial · Dipankar Bhattacharya K. S. Dwarakanath Sushan Konar · More Details Abstract ...

  17. Mafic dykes at the southwestern margin of Eastern Ghats belt ...

    Indian Academy of Sciences (India)

    Ghats belt: Evidence of rifting and collision · S Bhattacharya ... emplacement at ca. 1.3 Ga, which may have been initiated by intra-plate volcanism ... is described as a compressional orogen. Keywords ... charnockite gneiss, around Narasaraopet, AP; (b) thin mafic ... Sometimes orthopyroxene also occurs at the margin of ...

  18. ZrO2 as a high-κ dielectric for strained SiGe MOS devices

    Indian Academy of Sciences (India)

    Author Affiliations. R Mahapatra1 G S Kar1 C B Samantaray1 A Dhar1 D Bhattacharya2 S K Ray1. Department of Physics and Meteorology, Indian Institute of Technology, Kharagpur 721 302, India; Material Science Centre, Indian Institute of Technology, Kharagpur 721 302, India ...

  19. in human sperm motility and level of calcium and magnesium

    African Journals Online (AJOL)

    J. Valsa

    2015-11-06

    Nov 6, 2015 ... Calcium carbonate (AR Grade) (British Drug House, Bombay), for standard ... able for storage of sample used for chemical study ... period indicated a serious problem even if the sperm count and original motility were ... Bhattacharya RD. Circadian rhythm of urinary electrolytes from ...

  20. Journal of Chemical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Rhodium and iridium complexes of N-(2'-hydroxyphenyl)pyrrole-2-aldimine: Synthesis, structure, and spectral and electrochemical properties · Semanti Basu Indrani Pal Ray J Butcher Georgina Rosair Samaresh Bhattacharya · More Details Abstract Fulltext PDF. Reaction of N-(2'-hydroxyphenyl)pyrrole-2-aldimine (H2L) ...

  1. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Sadhana. S M Marathe. Articles written in Sadhana. Volume 25 Issue 1 February 2000 pp 57-69. Design, fabrication and performance evaluation of a 22-channel direct reading atomic emission spectrometer using inductively coupled plasma as a source of excitation · R P Shukla S S Bhattacharya D V ...

  2. Journal of Earth System Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Earth System Science; Volume 110; Issue 4. Volume 110, Issue 4. December 2001, pages 267-463. Recent Researchers in Petrology and Geochemistry. pp 267-267. Preface · S Bhattacharya J Ganguly · More Details Fulltext PDF. pp 269-285. Earth support systems: Threatened? Why? What can ...

  3. Journal of Earth System Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Felsic tuff from Rutland Island – A pyroclastic flow deposit in Miocene-sediments of Andaman-Java subduction complex · Tapan Pal Biswajit Ghosh Anindya Bhattacharya S K Bhaduri · More Details Abstract Fulltext PDF. The bedded felsic tuff exposed in Rutland Island, Andaman, consists of two facies: white massive tuff ...

  4. Bulletin of Materials Science | News

    Indian Academy of Sciences (India)

    Home; Journals; Bulletin of Materials Science. M K Rabinal. Articles written in Bulletin of Materials Science. Volume 35 Issue 4 August 2012 pp 529-532. An optical tweezer-based study of antimicrobial activity of silver nanoparticles · Yogesha Sarbari Bhattacharya M K Rabinal Sharath Ananthamurthy · More Details Abstract ...

  5. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Sadhana. L Sujatha. Articles written in Sadhana. Volume 34 Issue 4 August 2009 pp 643-650. Composite Si/PS membrane pressure sensors with micro and macro-porous silicon · L Sujatha Enakshi Bhattacharya · More Details Abstract Fulltext PDF. Porous Silicon (PS) is a versatile material with many ...

  6. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Astrophysics and Astronomy. Essy Samuel. Articles written in Journal of Astrophysics and Astronomy. Volume 38 Issue 2 June 2017 pp 31 Review Article. The Cadmium Zinc Telluride Imager on AstroSat · V. Bhalerao D. Bhattacharya A. Vibhute P. Pawar A. R. Rao M. K. Hingar Rakesh Khanna ...

  7. Journal of Biosciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Alok Bhattacharya1. School of Life Sciences and Information Technology, Jawaharlal Nehru University, New Delhi 110 007, India. Journal of Biosciences. Current Issue : Vol. 43, Issue 1 · Current Issue Volume 43 | Issue 1. March 2018. Home · Volumes & Issues · Special Issues · Forthcoming Articles · Gallery of Cover Art ...

  8. Cytotoxicity of Gemcitabine-Loaded-Microemulsions in Breast and ...

    African Journals Online (AJOL)

    death using a light microscope and the ApopNexin FITC apoptosis detection kit. Results: Hemolysis ... were observed by phase-contrast inverted ... Figure 3: Light microscopy images showing morphological changes in HCT116 cells treated with (A) 0.03 %v/v and (B) 0.3 ... Patra CR, Bhattacharya R, Wang E, Katarya A, Lau JS ...

  9. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Astrophysics and Astronomy. N. Vagshette. Articles written in Journal of Astrophysics and Astronomy. Volume 38 Issue 2 June 2017 pp 31 Review Article. The Cadmium Zinc Telluride Imager on AstroSat · V. Bhalerao D. Bhattacharya A. Vibhute P. Pawar A. R. Rao M. K. Hingar Rakesh Khanna ...

  10. Journal of Biosciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Biosciences; Volume 27; Issue 1. Issue front cover thumbnail. Volume 27, Issue 1. February 2002, pages a-70. Genome Analysis. pp a-a. Preface · Alok Bhattacharya · More Details Fulltext PDF. pp 1-6. SWORDS: A statistical tool for analysing large DNA sequences · Probal Chaudhuri Sandip Das.

  11. Journal of Earth System Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Earth System Science. A Sarkar. Articles written in Journal of Earth System Science. Volume 109 Issue 1 March 2000 pp 157-169. Palaeomonsoon and palaeoproductivity records of O, C and CaCO3 variations in the northern Indian Ocean sediments · A Sarkar R Ramesh S K Bhattacharya ...

  12. Effect of Terminalia chebula fruit extract on lipid peroxidation and ...

    African Journals Online (AJOL)

    2007-08-20

    Aug 20, 2007 ... products, mainly edible vegetables and spices, have a key role as chemopreventers ... protein; unit/minute/mg protein; µg/mg protein; nmoles of H2O2 ... induce peroxidation of cell membrane lipids (Bhattacharya et al., 1999). ... catalase-like activities in seminal plasma and spermatozoa. Int. J. Androl.

  13. Pramana – Journal of Physics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Small-angle neutron scattering study of aggregate structures of multi-headed pyridinium surfactants in aqueous solution · J Haldar V K Aswal P S Goyal S Bhattacharya · More Details Abstract Fulltext PDF. The aggregate structures of a set of novel single-chain surfactants bearing one, two and three pyridinium headgroups ...

  14. Fulltext PDF

    Indian Academy of Sciences (India)

    Unknown

    Bagchi B see Mukherjee A. 393. Baghmar M. Kinetics and mechanism of oxidation of aliphatic alcohols by tetrabutylammonium tribromide. 139. Balasubramanian S see Krishnan M. 579. Banerji K K see Goswami G. 43. Bhardwaj R K see Pal A. 215. Bhattacharya A K see Tantri S P. 681. Bhattacharyya S see Mukherjee A.

  15. An Integrated Approach to Biology

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 16; Issue 8. An Integrated Approach to Biology. Aniket Bhattacharya. General Article Volume 16 Issue 8 August 2011 pp 742-753. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/016/08/0742-0753 ...

  16. R P Shukla

    Indian Academy of Sciences (India)

    Home; Journals; Sadhana. R P Shukla. Articles written in Sadhana. Volume 25 Issue 1 February 2000 pp 57-69. Design, fabrication and performance evaluation of a 22-channel direct reading atomic emission spectrometer using inductively coupled plasma as a source of excitation · R P Shukla S S Bhattacharya D V Udupa ...

  17. Journal of Earth System Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Earth System Science; Volume 111; Issue 4. Volume 111, Issue 4. December 2002, pages 379-510. pp 379-390. Isotopic and sedimentological clues to productivity change in Late Riphean Sea: A case study from two intracratonic basins of India · P P Chakraborty A Sarkar S K Bhattacharya P ...

  18. Journal of Chemical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    S Bhattacharya1 A Kirwai1 Aditya N Panda1 2 H-D Meyer3. Department of Chemistry, Indian Institute of Technology Guwahati, Guwahati 781 039, India; Department of Chemistry, École Normale Supérieure, Paris, 24 rue Lhomond, 75231 Paris CEDEX 05, France; Theoretische Chemie, Universität Heidelberg, ...

  19. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Sadhana. D V Udupa. Articles written in Sadhana. Volume 25 Issue 1 February 2000 pp 57-69. Design, fabrication and performance evaluation of a 22-channel direct reading atomic emission spectrometer using inductively coupled plasma as a source of excitation · R P Shukla S S Bhattacharya D V Udupa ...

  20. Pramana – Journal of Physics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    C Bhattacharya. Articles written in Pramana – Journal of Physics. Volume 57 Issue 1 July 2001 pp 203-207 Contributed Papers : Nuclear reactions. Deformation effects in the Si + C and Si + Si ... Volume 61 Issue 3 September 2003 pp 529-538 Research Articles. Fusion of light exotic nuclei at near-barrier energies: Effect of ...

  1. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Astrophysics and Astronomy. S. V. Vadawale. Articles written in Journal of Astrophysics and Astronomy. Volume 38 Issue 2 June 2017 pp 31 Review Article. The Cadmium Zinc Telluride Imager on AstroSat · V. Bhalerao D. Bhattacharya A. Vibhute P. Pawar A. R. Rao M. K. Hingar Rakesh Khanna ...

  2. Pramana – Journal of Physics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics. S Sarkar. Articles written in Pramana – Journal of Physics. Volume 57 Issue 1 July 2001 pp 165-169 Contributed Papers : Nuclear spectroscopy. Level structures of Mo – A comparative study · J M Chatterjee M Saha Sarkar S Bhattacharya P Banerjee S Sarkar R P Singh S ...

  3. Journal of Astrophysics and Astronomy | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Astrophysics and Astronomy. P. Priya. Articles written in Journal of Astrophysics and Astronomy. Volume 38 Issue 2 June 2017 pp 31 Review Article. The Cadmium Zinc Telluride Imager on AstroSat · V. Bhalerao D. Bhattacharya A. Vibhute P. Pawar A. R. Rao M. K. Hingar Rakesh Khanna A. P. K. ...

  4. AUTHOR INDEX

    Indian Academy of Sciences (India)

    Oza A T. Non-equilibrium and band tailing in organic conductors. 535. Pal Dipali. The extent of strangeness equilibration in quark gluon plasma. 1083. Pal S see Bhattacharya P. 1017. Pandey B P. Thermal condensation mode in a dusty plasma. 491. Pant D N. Bianchi type I string cosmologies. 433. Parhi S see Pandey B P.

  5. Pramana – Journal of Physics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics. SHILPI JAIN. Articles written in Pramana – Journal of Physics. Volume 87 Issue 3 September 2016 pp 35 Special Issue. A review of the discovery of SM-like Higgs boson in H → γγ decay channel with the CMS detector at the LHC · SATYAKI BHATTACHARYA SHILPI JAIN.

  6. Resonance – Journal of Science Education | News

    Indian Academy of Sciences (India)

    pp 28-41 General Article. The Football - From Euclid to Soccer it is... A R Rao · More Details Fulltext PDF. pp 42-49 General Article. Chemical Research of Sir Prafulla Chandra Rây · Sreebrata Goswami Samaresh Bhattacharya · More Details Fulltext PDF. pp 50-53 General Article. Acharya Prafulla Chandra at the College of ...

  7. M Saha Sarkar

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics. M Saha Sarkar. Articles written in Pramana – Journal of Physics. Volume 57 Issue 1 July 2001 pp 165-169 Contributed Papers : Nuclear spectroscopy. Level structures of Mo – A comparative study · J M Chatterjee M Saha Sarkar S Bhattacharya P Banerjee S Sarkar R P Singh ...

  8. Magnetic Fields of Neutron Stars

    Indian Academy of Sciences (India)

    Sushan Konar

    2017-09-12

    Sep 12, 2017 ... the material properties of the region where currents supporting the ... The evolution of magnetic field in neutron stars, in particular, the question of ... 10⁻¹⁰, 10⁻⁹ and 10⁻⁸ M⊙/yr respectively. See Konar & Bhattacharya (1997) for details. Peq ≃ 1.9 ms ... supported by a grant (SR/WOS-A/PM-1038/2014) from ...

  9. 29th Mid-year meeting

    Indian Academy of Sciences (India)

    09.30-10.10, Session 1A – Special Lecture Dipankar Bhattacharya, IUCAA, Pune The AstroSat mission, View PDF file (3.7Mbytes) View Video. 10.10-13.00, Session 1B – Lectures by Fellows/Associates. 10.10-10.30, P. B. Sunil Kumar, IIT, Chennai, View PDF file (3.8Mbytes) View Video. 10.35-10.55, T. Punniyamurthy, IIT, ...

  10. Bulletin of Materials Science | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Mullite retains the usual orthorhombic habit of sillimanite. Rounded to sub-rounded zirconia dispersed within the mullite matrix of the sample ZA is observed. Volume 26 Issue 7 December 2003 pp 703-706 Cements. Solid state sintering of lime in presence of La2O3 and CeO2 · T K Bhattacharya A Ghosh H S Tripathi S K Das.

  11. Advances in industrial ergonomics and safety II

    Energy Technology Data Exchange (ETDEWEB)

    Das, B [ed.; Technical University of Nova Scotia, Halifax, NS (Canada). Dept. of Industrial Engineering

    1990-01-01

    135 papers were presented at the conference in 20 sessions with the following headings: aging and industrial performance; back injury and rehabilitation; bioinstrumentation and electromyography; cumulative trauma disorders; engineering anthropometry; equipment design and ergonomics; human computer interaction; human performance and worker satisfaction; human strength and testing; industrial accidents and prevention; industrial biomechanics; injuries in health care; manual materials handling; noise and vibration effects; occupational health and safety; robotics and agricultural machinery safety; statistics and modelling in ergonomics; work environment; workplace safety analysis; and workstation design. Papers include 'A model for analyzing mining machine illumination systems' by R.L. Unger, A.F. Glowacki and E.W. Rossi, 'Ergonomic design guidelines for underground coal mining equipment' by E.J. Conway and R. Unger, and 'Hot work environment and human strain - a relation proposed' by K. Bhattacharya and S. Raja.

  13. Acetylcholinesterase activity in marine gastropods as biomarker of neurotoxic contaminants

    Digital Repository Service at National Institute of Oceanography (India)

    Sarkar, A.; Gaitonde, D.C.S.; Vashistha, D.

    Environmental Pollution: Ecological Impacts, Health Issues and Management. Compiled and edited by Dr. Badal Bhattacharya, Prof. Arabinda Ghosh & Prof. Shyamal Kumar Majumdar. Mudrakar, 18A, Radhanath Mullick ...

  14. Novel drugs in the management of acute mountain sickness and high altitude pulmonary edema

    OpenAIRE

    Sikri, Gaurav; Bhattacharya, Anirban

    2015-01-01

    Gaurav Sikri, Anirban Bhattacharya; Department of Physiology, Armed Forces Medical College, Wanowarie, Pune, India. We read with great interest the review article titled “Wilderness medicine at high altitude: recent developments in the field” by Shah et al.1 The authors have comprehensively summarized the recent advances in the field of high altitude medicine relevant to sports and travel medicine. However, Shah et al have described potential drugs for management of high-alti...

  15. Pension Reforms in India: Myth, Reality and Policy Choices

    OpenAIRE

    Gupta Ramesh

    2002-01-01

    Escalating costs of the pension system are forcing the Indian Government to reevaluate the formal programmes that provide social security to employees. The government has so far received three official reports (namely, OASIS, IRDA and Bhattacharya), which have examined the issue and suggested several measures to provide a safety net to the aging population. This paper examines the recommendations made in these reports and analyses their potential effects. It is organized around five poli...

  16. Optimization of GaN Nanorod Growth Conditions for Coalescence Overgrowth

    Science.gov (United States)

    2016-02-04

    ... 21, 2016 ... PI: Chih-Chung (C. C.) Yang, ccycc@ntu.edu.tw, Graduate Institute of Photonics and Optoelectronics, National Taiwan University ... nanowire light emitting diodes grown on (001) silicon by molecular beam epitaxy, Nano Lett. 10 (2010) 3355-3359. [16] W. Guo, A. Banerjee, P. Bhattacharya, B.S. Ooi, InGaN/GaN disk-in-nanowire white light emitting diodes on (001) silicon, Appl. Phys. Lett. 98 (2011) 193102. [17] H.P.T. Nguyen, M

  17. Nonparametric estimation of benchmark doses in environmental risk assessment

    Science.gov (United States)

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary: An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits' small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
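The isotonic-regression route described above can be sketched in a few lines: a pool-adjacent-violators (PAVA) fit to the observed response proportions, then linear interpolation to the dose whose extra risk reaches the benchmark response. The doses, counts and the 10% BMR below are hypothetical illustrations, and the sketch omits the paper's bootstrap confidence limits.

```python
def pava(y, w):
    """Pool-adjacent-violators algorithm: weighted non-decreasing fit."""
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(float(yi)); wts.append(float(wi)); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            tot = wts[-2] + wts[-1]                      # merge violating blocks
            vals[-2] = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / tot
            wts[-2] = tot
            cnts[-2] += cnts[-1]
            vals.pop(); wts.pop(); cnts.pop()
    fit = []
    for v, c in zip(vals, cnts):
        fit.extend([v] * c)                              # expand blocks back out
    return fit

def bmd_from_isotonic(doses, iso, bmr=0.1):
    """Smallest interpolated dose whose extra risk equals bmr."""
    p0 = iso[0]
    target = p0 + bmr * (1.0 - p0)                       # extra-risk definition
    for i in range(1, len(doses)):
        if iso[i] >= target:
            f = (target - iso[i - 1]) / (iso[i] - iso[i - 1])
            return doses[i - 1] + f * (doses[i] - doses[i - 1])
    return None                                          # BMR not reached

# Hypothetical quantal-response experiment: responders out of 20 per dose group.
doses = [0.0, 25.0, 50.0, 100.0, 200.0]
n = [20.0] * 5
resp = [1.0, 2.0, 1.0, 6.0, 14.0]
phat = [r / m for r, m in zip(resp, n)]

iso = pava(phat, n)                                      # monotone fit
bmd = bmd_from_isotonic(doses, iso, bmr=0.1)
```

PAVA pools the non-monotone proportions at 25 and 50 into a flat block before the interpolation step, which is exactly why the nonparametric estimate is robust to dips that a misspecified parametric curve would chase.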

  18. Techniques applied in design optimization of parallel manipulators

    CSIR Research Space (South Africa)

    Modungwa, D

    2011-11-01

    Full Text Available ... the desired dexterous workspace, Robot. Comput. Integrated Manuf., vol. 23, pp. 38-46, 2007. [12] A.P. Murray, F. Pierrot, P. Dauchez and J.M. McCarthy, "A planar quaternion approach to the kinematic synthesis of a parallel manipulator", Robotica, vol. ... design of a three translational DoFs parallel manipulator, Robotica, vol. 24, pp. 239, 2005. [15] J. Angeles, "The robust design of parallel manipulators", in 1st Int. Colloquium, Collaborative Research Centre 562, 2002. [16] S. Bhattacharya, H...

  19. Nature of the crust in the Laxmi Basin (14°-20°N), western continental margin of India

    Digital Repository Service at National Institute of Oceanography (India)

    Krishna, K.S.; Rao, D.G.; Sar, D.

    for determining the crust below the shelf, Laxmi Basin and Western Basin. 3. Crustal structure and associated gravity and magnetic anomalies. In the present study we have integrated the new datasets with published geophysical data: Conrad 1707 profiles (Naini..., 1980), SK-12, 22, 50, 64 and 79 profiles (Bhattacharya et al., 1994a; Chaubey et al., 2002) and twelve long-range sonobuoy refraction stations (Naini and Talwani, 1983) (Figure 1) for carrying out integrated interpretation of the data. 3.1 Previous...

  20. Gyroidal nanoporous carbons - Adsorption and separation properties explored using computer simulations

    Directory of Open Access Journals (Sweden)

    S. Furmaniak

    2016-02-01

    Full Text Available Adsorption and separation properties of gyroidal nanoporous carbons (GNCs), a new class of exotic nanocarbon materials, are studied for the first time using the hyper-parallel tempering Monte Carlo simulation technique. The porous structure of the GNC models is evaluated by the method proposed by Bhattacharya and Gubbins. All the studied structures are strictly microporous. Next, mechanisms of Ar adsorption are described based on the analysis of adsorption isotherms, enthalpy plots, the values of Henry's constants, and α_{s} and adsorption potential distribution plots. It is concluded that below pore diameters of ca. 0.8 nm, the primary micropore filling process dominates. For structures possessing larger micropores, a primary and secondary micropore filling mechanism is observed. Finally, the separation properties of GNCs toward CO_{2}/CH_{4}, CO_{2}/N_{2} and CH_{4}/N_{2} mixtures are discussed and compared with those of Virtual Porous Carbon models. GNCs may be considered potential adsorbents for gas mixture separation, having separation efficiency similar to or even higher than activated carbons with similar pore diameters.
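As a note on how separation efficiency toward binary mixtures is commonly quantified in such simulation studies, the equilibrium selectivity follows directly from adsorbed-phase and bulk-phase mole fractions. The loadings below are illustrative placeholders, not GNC simulation output.

```python
# Equilibrium selectivity of CO2 over CH4 from a binary-mixture adsorption run:
#   S = (x_CO2 / x_CH4) / (y_CO2 / y_CH4)
# where x are adsorbed-phase and y bulk-phase mole fractions.
n_ads = {"CO2": 3.2, "CH4": 0.8}    # adsorbed amounts, mmol/g (hypothetical)
y_bulk = {"CO2": 0.5, "CH4": 0.5}   # equimolar bulk mixture

x_co2 = n_ads["CO2"] / (n_ads["CO2"] + n_ads["CH4"])
x_ch4 = 1.0 - x_co2
S = (x_co2 / x_ch4) / (y_bulk["CO2"] / y_bulk["CH4"])
```

For the equimolar bulk phase assumed here, S reduces to the ratio of adsorbed amounts (4.0); values above 1 indicate preferential CO2 uptake.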

  1. Vibrio Parahaemolyticus: The Threat of Another Vibrio Acquiring Pandemic Potential

    Digital Repository Service at National Institute of Oceanography (India)

    Ramamurthy, T.; Nair, G.B.

    investigations of Vibrio parahaemolyticus in oysters following outbreaks in Washington, Texas, and New York (1997 and 1998). Appl. Environ. Microbiol. 66, 4649-4654. DePaola, A., Ulaszek, J., Kaysner, C. A., Tenge, B. J., Nordstrom, J. L., Wells, J., Puhr, N...-710. Andrews, L. S., DeBlanc, S., Veal, C. D., Park, D. L., 2003. Response of Vibrio parahaemolyticus O3:K6 to a hot water/cold shock pasteurization process. Food Addit. Contam. 20, 331-334. Bag, P. K., Nandi, S., Bhadra, R. K., Ramamurthy, T., Bhattacharya, S...

  2. Managing Inflections in Life and Career: Tale from a Physicist

    Science.gov (United States)

    Bhattacharya, Santanu

    2010-03-01

    By training, a physicist possesses one of the rarest qualities ever imparted in an educational degree program, namely, the ability to take on complex problems, divide them into "solvable" parts, derive solutions and put them back together as insightful outputs. Dr Bhattacharya, CEO of Salorix, a research, analytics and consulting firm, explains how he has used these skills learned at graduate school to build a career as a scientist, management consultant and entrepreneur. He will also speak about how the real-life skillsets of understanding and dealing with "inflections", self discovery and introspection can be a great tool for managing one's life and career progression.

  3. BRAIN vol. 4 (2013, issues 1-4, first 4 pages

    Directory of Open Access Journals (Sweden)

    Bogdan Patrut

    2013-10-01

    Full Text Available TABLE OF CONTENTS. Sections BRAINStorming and BRAINovations: 1. Evolving Spiking Neural Networks for Control of Artificial Creatures, by Arash Ahmadi (p. 5); 2. Artificial Neuron Modelling Based on Wave Shape, by Kieran Greer (p. 20); 3. Brain-Like Artificial Intelligence for Automation – Foundations, Concepts and Implementation Examples, by Rosemarie Velik (p. 26); 4. Performance Analysis of Unsupervised Clustering Methods for Brain Tumor Segmentation, by Tushar H Jaware and Dr. K B Khanchandani (p. 55); 5. High Performance Data Mining by Genetic Neural Network, by Dadmehr Rahbari (p. 60); 6. Isomorphism Between Estes' Stimulus Fluctuation Model and a Physical-Chemical System, by Makoto Yamaguchi (p. 71); 7. Intelligent Continuous Double Auction Method for Service Allocation in Cloud Computing, by Nima Farajian and Kamran Zamanifar (p. 74); 8. An Enhancement Over Texture Feature Based Multiclass Image Classification Under Unknown Noise, by Ajay Kumar Singh, V P Shukla, Shamik Tiwari and S R Biradar (p. 84); 9. Suicide: Neurochemical Approaches, by Ritabrata Banerjee, Anup K. Ghosh, Balaram Ghosh, Somnath Bhattacharya and Amal C. Mondal (p. 97); 10. L1 Transfer in Post-Verbal Preposition: An Inter-level Comparison, by Samira Mollaei, Ali Jahangard and Hemaseh Bagheri (p. 125). Section BRAINotes: 11. Looking for Oriental Fundamentals of Fuzzy Logic, by Ángel Garrido and Piedad Yuste (p. 141). Instructions for authors (p. 146).

  4. Cancer Antigen Prioritization: A Road Map to Work in Defining Vaccines Against Specific Targets. A Point of View

    International Nuclear Information System (INIS)

    Gomez, Daniel E.; Vázquez, Ana María; Alonso, Daniel F.

    2012-01-01

    The use of anti-idiotype antibodies as vaccines to stimulate antitumor immunity is a very promising pathway in the therapy of cancer. A good body of work in animal tumor models has demonstrated the efficacy of anti-Id vaccines in preventing tumor growth and curing mice with established tumors. A number of monoclonal anti-Id antibodies that mimic different human tumor-associated antigens (TAAs) have been developed and tested in the clinic, with interesting results. In general terms, the antigen mimicry by anti-Id antibodies has reflected structural homology in most cases, and amino acid sequence homology in a minority of them. The major challenge of immunotherapy using anti-idiotype vaccines is to identify the optimal anti-idiotype antibody that will function as a true surrogate antigen for a TAA system and, ideally, will generate both humoral and cellular immune responses. Although several clinical studies have shown enhanced patient survival with anti-Id vaccines, the true demonstration of their efficacy will depend upon the results of several randomized Phase III clinical trials that are currently planned or ongoing (Bhattacharya-Chatterjee et al.).

  5. Cancer Antigen Prioritization: A Road Map to Work in Defining Vaccines Against Specific Targets. A Point of View

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Daniel E. [Laboratory of Molecular Oncology, Quilmes National University, Buenos Aires (Argentina); Vázquez, Ana María [Center of Molecular Immunology, La Habana (Cuba); Alonso, Daniel F., E-mail: degomez@unq.edu.ar [Laboratory of Molecular Oncology, Quilmes National University, Buenos Aires (Argentina)

    2012-06-28

    The use of anti-idiotype antibodies as vaccines to stimulate antitumor immunity is a very promising pathway in the therapy of cancer. A good body of work in animal tumor models has demonstrated the efficacy of anti-Id vaccines in preventing tumor growth and curing mice with established tumors. A number of monoclonal anti-Id antibodies that mimic different human tumor-associated antigens (TAAs) have been developed and tested in the clinic, with interesting results. In general terms, the antigen mimicry by anti-Id antibodies has reflected structural homology in most cases, and amino acid sequence homology in a minority of them. The major challenge of immunotherapy using anti-idiotype vaccines is to identify the optimal anti-idiotype antibody that will function as a true surrogate antigen for a TAA system and, ideally, will generate both humoral and cellular immune responses. Although several clinical studies have shown enhanced patient survival with anti-Id vaccines, the true demonstration of their efficacy will depend upon the results of several randomized Phase III clinical trials that are currently planned or ongoing (Bhattacharya-Chatterjee et al.).

  6. The Measurement and Interpretation of Transformation Temperatures in Nitinol

    Science.gov (United States)

    Duerig, T. W.; Pelton, A. R.; Bhattacharya, K.

    2017-12-01

    A previous paper (Duerig and Bhattacharya in Shap Mem Superelasticity 1:153-161, 2015) introduced several engineering considerations surrounding the R-phase in Nitinol and highlighted a common, if not pervasive, misconception regarding the use of the term Af by the medical device industry. This paper brings additional data to bear on the issue and proposes more accurate terminology. Moreover, a variety of tools are used to establish the forward and reverse stress-temperature phase diagrams for a superelastic wire typical of that used in medical devices. Once established, the two most common methods of measuring transformation temperatures, Differential Scanning Calorimetry and Bend Free Recovery, are tested against the observed behavior. Light is also shed upon the origin of the Clausius-Clapeyron ratio (dσ/dT), the triple point, and why such large variations are reported in superelastic alloys.

  7. Editorial: Global Opportunities and Local Businesses

    Directory of Open Access Journals (Sweden)

    Krzysztof Wach

    2013-03-01

    Full Text Available The theme of the first issue is strongly multi-disciplinary. It links economics with management by exploring global opportunities through particular local businesses. The European Union, like all of Europe, is now facing grand global challenges that primarily relate to economic issues. As stipulated by H. Sirkin, J. Hemerling and A. Bhattacharya in their world-famous book Globality: Competing with Everyone from Everywhere for Everything, in the near future European, American and Japanese firms will compete not only with each other, but increasingly with very competitive Chinese, Indian, South American, and even African firms, which currently may seem far-fetched (Kotler & Caslione, 2009, p. 29). We believe it is extremely important to recognise global opportunities, which have resulted from globalisation, internationalisation and Europeanisation processes (Wach, 2012, pp. 137-150 and 298-299).

  8. Private sector, for-profit health providers in low and middle income countries: can they reach the poor at scale?

    Science.gov (United States)

    Tung, Elizabeth; Bennett, Sara

    2014-06-24

    The bottom of the pyramid concept suggests that profit can be made in providing goods and services to poor people, when high volume is combined with low margins. To date there has been very limited empirical evidence from the health sector concerning the scope and potential for such bottom of the pyramid models. This paper analyzes private for-profit (PFP) providers currently offering services to the poor on a large scale, and assesses the future prospects of bottom of the pyramid models in health. We searched published and grey literature and databases to identify PFP companies that provided more than 40,000 outpatient visits per year, or that covered 15% or more of a particular type of service in their country. For each included provider, we searched for additional information on location, target market, business model and performance, including quality of care. Only 10 large-scale PFP providers were identified. The majority of these were in South Asia, and most provided specialized services such as eye care. The characteristics of the business models of these firms were found to be similar to the non-profit providers studied by other analysts (such as Bhattacharya, 2010). They pursued social rather than traditional marketing, partnerships with government, low-cost/high-volume services, and cross-subsidization between different market segments. There was a lack of reliable data concerning these providers. There is very limited evidence to support the notion that large-scale bottom of the pyramid models in health offer good prospects for extending services to the poor in the future. In order to be successful, PFP providers often require partnerships with government or support from social health insurance schemes. Nonetheless, more reliable and independent data on such schemes are needed.

  9. Análise Multitemporal da Cobertura Florestal da Microbacia do Arroio Grande, Santa Maria, RS Multitemporal Analysis of Forest Cover in the Arroio Grande Small Hydrological Basin, Santa Maria, RS

    Directory of Open Access Journals (Sweden)

    Joel Juliano Kleinpaul

    2011-03-01

    Full Text Available

    This work carries out a multitemporal analysis of the forest cover of the Arroio Grande small hydrological basin, Santa Maria, RS, Brazil. Four satellite images were used: LANDSAT 5 (1987), LANDSAT 5 (1995), LANDSAT 7 (2002) and CBERS 2 (2005). The images were classified with the Bhattacharya algorithm, and the resulting thematic maps were then cross-tabulated. Maps with the following land uses were obtained: forest cover, regeneration and deforestation, i.e., forests that remained unchanged from one date to the next, those that regenerated and those that were cleared. Over the 18-year period, forest cover in the basin increased by 10.24%, from 14,135.42 ha (40.01%) in 1987 to 17,752.20 ha (50.25%) in 2005. This was attributed to the entry into force of the State Forest Code, the growing awareness of rural landowners, and the larger-scale establishment of exotic-species plantations in the state.

  10. Evidencia internacional del comportamiento de compra del consumidor frente a iniciativas de RSE y competencias corporativas de la empresa

    Directory of Open Access Journals (Sweden)

    Percy Samoel Marquina Feldman

    2013-04-01

    Full Text Available Research on Corporate Social Responsibility (CSR) may have reached a stage of maturity, but the same is not necessarily true of consumer responses to CSR initiatives. The results of studies testing the relationship between the two are controversial. On the one hand, some indicate that a positive relationship exists between CSR actions and consumers' reactions toward the company and its products (Ellen, Webb, & Mohr, 2006; Carvalho, Sen, Mota, & Carneiro, 2010); on the other, some studies hold that this relationship is not always direct and evident, leaving to numerous factors the task of explaining the effects of a company's CSR activities on consumers' purchase intentions (Carrigan & Attalla, 2001; Maignan & Ferrell, 2004). There thus appears to be a controversy over what is and what is not relevant when explaining why consumers buy products with CSR attributes (Devinney, Auger, Eckhardt, & Birtchnell, 2006; Arredondo, Maldonado, & De la Garza, 2010). Auger, Burke, Devinney, and Louviere (2003) try to settle the controversy by pointing out the shortcomings of studies that, while ranking and prioritizing CSR topics, avoid including trade-off measures between traditional attributes and CSR attributes; as a result, their findings cannot show why consumers prefer products with CSR attributes over others (Fan, 2005). The practical consequences of establishing a clear link between CSR actions and consumer responses to them are many. A positive link between CSR and consumer choice encourages companies to make large investments in CSR (Mittal, 2008), shifting the CSR debate from "whether" to "how" (Bhattacharya & Sen, 2004). Moreover, support for CSR affects not only purchase motives and loyalty, but also

  11. Optimized energy of spectral CT for infarct imaging: Experimental validation with human validation.

    Science.gov (United States)

    Sandfort, Veit; Palanisamy, Srikanth; Symons, Rolf; Pourmorteza, Amir; Ahlman, Mark A; Rice, Kelly; Thomas, Tom; Davies-Venn, Cynthia; Krauss, Bernhard; Kwan, Alan; Pandey, Ankur; Zimmerman, Stefan L; Bluemke, David A

    Late contrast enhancement visualizes myocardial infarction, but the contrast-to-noise ratio (CNR) is low using conventional CT. The aim of this study was to determine whether spectral CT can improve imaging of myocardial infarction. A canine model of myocardial infarction was produced in 8 animals (90-min occlusion, reperfusion). Later, imaging was performed after contrast injection using dual-energy CT at 90 kVp/150 kVp Sn. The following reconstructions were evaluated: single-energy 90 kVp, mixed, iodine map, and multiple conventional and noise-optimized virtual monoenergetic reconstructions. Regions of interest were measured in infarct and remote regions to calculate the contrast-to-noise ratio (CNR) and the Bhattacharya distance (a metric of the separation between regions). Blinded assessment of image quality was performed. The same reconstruction methods were applied to CT scans of four patients with known infarcts. For the animal studies, the highest CNR for infarct vs. myocardium was achieved in the lowest-keV (40 keV) noise-optimized virtual monoenergetic (VMo) images (CNR 4.42, IQR 3.64-5.53), which was superior to the 90 kVp, mixed and iodine-map reconstructions (p = 0.008 and p = 0.002 for the first two comparisons). Low energy in conjunction with noise-optimized monoenergetic post-processing improves the CNR of myocardial infarct delineation by approximately 20-25%. Published by Elsevier Inc.
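
    The Bhattacharya (more commonly Bhattacharyya) distance used above to quantify infarct/remote separation has a closed form for two normal distributions. Below is a minimal sketch, assuming the two region-of-interest intensity distributions are modeled as univariate Gaussians; the function name and the example values are illustrative, not taken from the study:

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate normal distributions:
    D_B = (mu1 - mu2)^2 / (4*(var1 + var2))
          + 0.5 * ln((var1 + var2) / (2*sqrt(var1*var2)))."""
    return ((mu1 - mu2) ** 2 / (4.0 * (var1 + var2))
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

# Identical distributions are at distance 0; better-separated ROIs score higher.
assert bhattacharyya_gaussian(100.0, 25.0, 100.0, 25.0) == 0.0
assert (bhattacharyya_gaussian(80.0, 25.0, 120.0, 25.0)
        > bhattacharyya_gaussian(95.0, 25.0, 105.0, 25.0))
```

    Unlike CNR, this measure grows both when the means separate and when the variances differ, which is why it can serve as a complementary metric of how well two image regions are differentiated.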

  12. Removal of Cr6+ and Ni2+ from aqueous solution using bagasse and fly ash.

    Science.gov (United States)

    Rao, M; Parwate, A V; Bhole, A G

    2002-01-01

    Raw bagasse and fly ash, the wastes generated in sugar mills and boilers respectively, have been used as low-cost potential adsorbents. Raw bagasse was pretreated with 0.1N NaOH followed by 0.1N CH3COOH before its application. These low-cost adsorbents were used for the removal of chromium and nickel from aqueous solution. The kinetics of adsorption and the extent of adsorption at equilibrium depend on the physical and chemical characteristics of the adsorbent, the adsorbate and the experimental system. The effects of hydrogen ion concentration, contact time, sorbent dose, initial concentrations of adsorbate and adsorbent, and particle size on the uptake of chromium and nickel were studied in batch experiments. The sorption data have been correlated with the Langmuir, Freundlich, and Bhattacharya and Venkobachar adsorption models. The efficiencies of the adsorbent materials for the removal of Cr(VI) and Ni(II) were found to be between 56.2 and 96.2% and between 83.6 and 100%, respectively. These results were obtained at the optimized conditions of pH, contact time, sorbent dose and sorbate concentration of 100 mg/l, with adsorbent particle size varied between 0.075 and 4.75 mm. The order of selectivity is powdered activated carbon > bagasse > fly ash for Cr(VI) removal and powdered activated carbon > fly ash > bagasse for Ni(II) removal.
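
    The Langmuir correlation mentioned above is commonly fitted in its linearized form, Ce/qe = Ce/qmax + 1/(KL·qmax), where qmax is the monolayer capacity and KL the Langmuir constant. A minimal sketch under that assumption; the data below are synthetic, not the paper's measurements:

```python
def fit_langmuir(ce, qe):
    """Least-squares fit of the linearized Langmuir isotherm
    ce/qe = ce/qmax + 1/(KL*qmax). Returns (qmax, KL)."""
    y = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    xbar, ybar = sum(ce) / n, sum(y) / n
    slope = (sum((x - xbar) * (yi - ybar) for x, yi in zip(ce, y))
             / sum((x - xbar) ** 2 for x in ce))
    intercept = ybar - slope * xbar
    return 1.0 / slope, slope / intercept

# Synthetic equilibrium data generated with qmax = 10 mg/g and KL = 0.5 L/mg;
# the fit should recover both parameters.
ce = [1.0, 2.0, 5.0, 10.0, 20.0]
qe = [10.0 * 0.5 * c / (1.0 + 0.5 * c) for c in ce]
qmax, kl = fit_langmuir(ce, qe)
assert abs(qmax - 10.0) < 1e-6 and abs(kl - 0.5) < 1e-6
```

    The Freundlich model is fitted analogously after taking logarithms of both qe and Ce; which isotherm correlates the data better is an empirical question for each adsorbent.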

  13. Novel drugs in the management of acute mountain sickness and high altitude pulmonary edema

    Directory of Open Access Journals (Sweden)

    Sikri G

    2015-12-01

    Full Text Available Gaurav Sikri, Anirban Bhattacharya; Department of Physiology, Armed Forces Medical College, Wanowarie, Pune, India. We read with great interest the review article titled “Wilderness medicine at high altitude: recent developments in the field” by Shah et al.1 The authors have comprehensively summarized the recent advances in the field of high altitude medicine relevant to sports and travel medicine. However, Shah et al have described potential drugs for the management of high-altitude illnesses, such as acute mountain sickness (AMS), high altitude cerebral edema, and high altitude pulmonary edema (HAPE), as one group under the section “Novel drug treatment for AMS”. The pathophysiologies of these two sets of diseases (AMS/high altitude cerebral edema as one set and HAPE as the other) are different,2 and hence it would have been preferable to describe the novel drugs separately to elucidate the therapeutic approach for the two different classes of disease. View original paper by Shah et al.

  14. Continuous quantum measurement and the quantum to classical transition

    International Nuclear Information System (INIS)

    Bhattacharya, Tanmoy; Habib, Salman; Jacobs, Kurt

    2003-01-01

    While ultimately they are described by quantum mechanics, macroscopic mechanical systems are nevertheless observed to follow the trajectories predicted by classical mechanics. Hence, in the regime defining macroscopic physics, the trajectories of the correct classical motion must emerge from quantum mechanics, a process referred to as the quantum to classical transition. Extending previous work [Bhattacharya, Habib, and Jacobs, Phys. Rev. Lett. 85, 4852 (2000)], here we elucidate this transition in some detail, showing that once the measurement processes that affect all macroscopic systems are taken into account, quantum mechanics indeed predicts the emergence of classical motion. We derive inequalities that describe the parameter regime in which classical motion is obtained, and provide numerical examples. We also demonstrate two further important properties of the classical limit: first, that multiple observers all agree on the motion of an object, and second, that classical statistical inference may be used to correctly track the classical motion.

  15. Simple Closed-Form Expression for Penning Reaction Rate Coefficients for Cold Molecular Collisions by Non-Hermitian Time-Independent Adiabatic Scattering Theory.

    Science.gov (United States)

    Pawlak, Mariusz; Ben-Asher, Anael; Moiseyev, Nimrod

    2018-01-09

    We present a simple expression and its derivation for reaction rate coefficients for cold anisotropic collision experiments based on adiabatic variational theory and time-independent non-Hermitian scattering theory. We demonstrate that only the eigenenergies of the resulting one-dimensional Schrödinger equation for different complex adiabats are required. The expression is applied to calculate the Penning ionization rate coefficients of an excited metastable helium atom with molecular hydrogen in an energy range spanning from hundreds of kelvins down to the millikelvin regime. Except for trivial quantities like the masses of the nuclei and the bond length of the diatomic molecule participating in the collision, one needs as input data only the complex potential energy surface (CPES). In calculations, we used recently obtained ab initio CPES by D. Bhattacharya et al. ( J. Chem. Theory Comput. 2017 , 13 , 1682 - 1690 ) without fitting parameters. The results show good accord with current measurements ( Nat. Phys. 2017 , 13 , 35 - 38 ).

  16. Deconstructing brain-derived neurotrophic factor actions in adult brain circuits to bridge an existing informational gap in neuro-cell biology

    Directory of Open Access Journals (Sweden)

    Heather Bowling

    2016-01-01

    Full Text Available Brain-derived neurotrophic factor (BDNF) plays an important role in neurodevelopment, synaptic plasticity, learning and memory, and in preventing neurodegeneration. Despite decades of investigations into downstream signaling cascades and changes in cellular processes, the mechanisms of how BDNF reshapes circuits in vivo remain unclear. This informational gap partly arises from the fact that the bulk of studies into the molecular actions of BDNF have been performed in dissociated neuronal cultures, while the majority of studies on synaptic plasticity, learning and memory were performed in acute brain slices or in vivo. A recent study by Bowling-Bhattacharya et al. measured the proteomic changes in acute adult hippocampal slices following BDNF treatment and reported changes in proteins of neuronal and non-neuronal origin that may in concert modulate synaptic release and secretion in the slice. In this paper, we place these findings into the context of the existing literature and discuss how they impact our understanding of how BDNF can reshape the brain.

  17. Clinical Characteristics and Current Therapies for Inherited Retinal Degenerations

    Science.gov (United States)

    Sahel, José-Alain; Marazova, Katia; Audo, Isabelle

    2015-01-01

    Inherited retinal degenerations (IRDs) encompass a large group of clinically and genetically heterogeneous diseases that affect approximately 1 in 3000 people (>2 million people worldwide) (Bessant DA, Ali RR, Bhattacharya SS. 2001. Molecular genetics and prospects for therapy of the inherited retinal dystrophies. Curr Opin Genet Dev 11: 307–316.). IRDs may be inherited as Mendelian traits or through mitochondrial DNA, and may affect the entire retina (e.g., rod–cone dystrophy, also known as retinitis pigmentosa, cone dystrophy, cone–rod dystrophy, choroideremia, Usher syndrome, and Bardet-Biedl syndrome) or be restricted to the macula (e.g., Stargardt disease, Best disease, and Sorsby fundus dystrophy), ultimately leading to blindness. IRDs are a major cause of severe vision loss, with profound impact on patients and society. Although IRDs remain untreatable today, significant progress toward therapeutic strategies for IRDs has marked the past two decades. This progress has been based on better understanding of the pathophysiological pathways of these diseases and on technological advances. PMID:25324231

  18. Feedback-induced bistability of an optically levitated nanoparticle: A Fokker-Planck treatment

    Science.gov (United States)

    Ge, Wenchao; Rodenburg, Brandon; Bhattacharya, M.

    2016-08-01

    Optically levitated nanoparticles have recently emerged as versatile platforms for investigating macroscopic quantum mechanics and enabling ultrasensitive metrology. In this paper we theoretically consider two damping regimes of an optically levitated nanoparticle cooled by cavityless parametric feedback. Our treatment is based on a generalized Fokker-Planck equation derived from the quantum master equation presented recently and shown to agree very well with experiment [B. Rodenburg, L. P. Neukirch, A. N. Vamivakas, and M. Bhattacharya, Quantum model of cooling and force sensing with an optically trapped nanoparticle, Optica 3, 318 (2016), 10.1364/OPTICA.3.000318]. For low damping, we find that the resulting Wigner function yields the single-peaked oscillator position distribution and recovers the appropriate energy distribution derived earlier using a classical theory and verified experimentally [J. Gieseler, R. Quidant, C. Dellago, and L. Novotny, Dynamic relaxation of a levitated nanoparticle from a non-equilibrium steady state, Nat. Nano. 9, 358 (2014), 10.1038/nnano.2014.40]. For high damping, in contrast, we predict a double-peaked position distribution, which we trace to an underlying bistability induced by feedback. Unlike in cavity-based optomechanics, stochastic processes play a major role in determining the bistable behavior. To support our conclusions, we present analytical expressions as well as numerical simulations using the truncated Wigner function approach. Our work opens up the prospect of developing bistability-based devices, characterization of phase-space dynamics, and investigation of the quantum-classical transition using levitated nanoparticles.

  19. Crecimiento y mortalidad natural del pez Haemulon aurolineatum (Teleostei: Haemulidae del suroeste de la isla de Margarita, Venezuela

    Directory of Open Access Journals (Sweden)

    Edwis Bravo

    2009-09-01

    From the length-frequency data, a preliminary asymptotic length (L∞) was estimated by applying the routine of Powell and Wetherall, and the growth coefficient (k) through ELEFAN I using the program FISAT II (FAO-ICLARM). Modal progression analysis was then used, after decomposition of the length frequencies according to Bhattacharya, and the estimates of L∞ and k were refined following the procedure of Gulland and Holt. The growth-in-length curve was fitted to the von Bertalanffy model and described an exponential curve, with accelerated growth until two years of age. The rate of natural mortality was high (M = 1.15 year-1), probably due to heavy predation. Rev. Biol. Trop. 57 (3): 699-706. Epub 2009 September 30.
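
    The Bhattacharya method referenced above decomposes a length-frequency histogram by exploiting the fact that, for a single Gaussian (normally distributed) age-group component, the log-ratio of successive bin counts falls on a straight line when plotted against length. A minimal sketch of that core idea; the function name and the data are illustrative, not from the study:

```python
import math

def log_count_ratios(midpoints, counts):
    """Pairs (midpoint, log(N[i+1]/N[i])) for successive histogram bins.
    A straight descending segment in this plot flags one Gaussian
    age-group component (the basis of Bhattacharya's method)."""
    return [(m, math.log(n2 / n1))
            for m, n1, n2 in zip(midpoints[1:], counts, counts[1:])
            if n1 > 0 and n2 > 0]

# Counts drawn from a single Gaussian mode (mean 12 cm, sd 2 cm), 1-cm bins.
mids = list(range(6, 19))
counts = [100 * math.exp(-((m - 12) ** 2) / (2 * 2.0 ** 2)) for m in mids]
pts = log_count_ratios(mids, counts)
# For one Gaussian, the log-ratios are exactly linear in the midpoint,
# so their successive differences are constant.
diffs = [b - a for (_, a), (_, b) in zip(pts, pts[1:])]
assert all(abs(d - diffs[0]) < 1e-9 for d in diffs)
```

    In practice one fits a line to each straight segment to recover the mean and standard deviation of that cohort, subtracts it from the histogram, and repeats for the next mode; tools such as FISAT II automate this procedure.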

  20. Modified homotopy perturbation method for solving hypersingular integral equations of the first kind.

    Science.gov (United States)

    Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z

    2016-01-01

    Modified homotopy perturbation method (HPM) was used to solve hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. Modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between modified HPM, standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011) and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM dominates the standard HPM and the others. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weights and polynomial functions. For rational solutions the absolute error decreases very fast as the number of collocation points increases.

  1. A high precision semi-analytic mass function

    Energy Technology Data Exchange (ETDEWEB)

    Del Popolo, Antonino [Dipartimento di Fisica e Astronomia, University of Catania, Viale Andrea Doria 6, I-95125 Catania (Italy); Pace, Francesco [Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester, M13 9PL (United Kingdom); Le Delliou, Morgan, E-mail: adelpopolo@oact.inaf.it, E-mail: francesco.pace@manchester.ac.uk, E-mail: delliou@ift.unesp.br [Instituto de Física Teorica, Universidade Estadual de São Paulo (IFT-UNESP), Rua Dr. Bento Teobaldo Ferraz 271, Bloco 2—Barra Funda, 01140-070 São Paulo, SP Brazil (Brazil)

    2017-03-01

    In this paper, extending past works of Del Popolo, we show how a high-precision mass function (MF) can be obtained using the excursion set approach and an improved barrier that implicitly takes into account a non-zero cosmological constant, the angular momentum acquired by tidal interaction of proto-structures, and dynamical friction. In the case of the ΛCDM paradigm, we find that our MF agrees at the 3% level with Klypin's Bolshoi simulation, in the mass range M_vir = 5 × 10^9 – 5 × 10^14 h^-1 M_⊙ and redshift range 0 ≲ z ≲ 10. For z = 0 we also compared our MF to several fitting formulae, and found in particular agreement with Bhattacharya's within 3% in the mass range 10^12 – 10^16 h^-1 M_⊙. Moreover, we discuss the validity of our MF for different cosmologies.

  2. Planned approaches to business and school partnerships. Does it make a difference? The business perspective.

    Science.gov (United States)

    Lee, Kerry; Hope, John; Abdulghani, Fatima

    2016-04-01

    In many countries, schools are encouraged to link with business to add authenticity to learning. The number of these business-school partnerships has shown a marked increase over the last twenty years. Traditionally, researchers investigating these partnerships have focussed on the schools' perspectives (Du, Bhattacharya, & Sen, 2010, pp. 32-33); this New Zealand research, however, focused solely on the business perspective of established school partnerships. The study used a mixed-methods approach utilising both an online survey and semi-structured interviews. Ten of the forty participating businesses surveyed used a brokering organisation as a way of developing and maintaining these partnerships, and some developed rationales to support the partnership. This study investigated the value of using brokering organisations, rationales and designated staff to support business-school partnerships. Findings indicate that brokers and designated staff play a very effective role in enhancing business-school links, and that more benefits are perceived when a rationale has been established. It is anticipated that these findings will support the development and success of business-school partnerships. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. COVARIANCE ASSISTED SCREENING AND ESTIMATION.

    Science.gov (United States)

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-11-01

    Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we knew where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any variable selection procedure β̂, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
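
    The performance measure described above, the Hamming distance between the sign vectors of β̂ and β, is straightforward to state in code. A minimal sketch; the function name and example values are illustrative, not from the paper:

```python
def sign_hamming(beta_hat, beta):
    """Number of coordinates where sign(beta_hat) and sign(beta) disagree,
    counting both missed signals and false positives."""
    sign = lambda v: (v > 0) - (v < 0)
    return sum(sign(a) != sign(b) for a, b in zip(beta_hat, beta))

# One false positive (index 1) and one missed signal would each add 1;
# here the distance is 2.
assert sign_hamming([1.3, 0.2, 0.0, -0.7], [1.0, 0.0, 0.0, 0.0]) == 2
```

    Counting sign errors rather than squared estimation error reflects the paper's focus: in the rare-and-weak regime the question is which coordinates are nonzero (and with which sign), not how precisely their magnitudes are estimated.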

  4. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    Science.gov (United States)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.

  5. Optimizing strategies to improve interprofessional practice for veterans, part 1

    Directory of Open Access Journals (Sweden)

    Bhattacharya SB

    2014-04-01

    Full Text Available Shelley B Bhattacharya,1–3 Michelle I Rossi,1,2 Jennifer M Mentz1; 1Geriatric Research Education and Clinical Center (GRECC), Veteran's Affairs Pittsburgh Healthcare System, 2University of Pittsburgh Medical Center, Pittsburgh, PA, USA; 3Albert Schweitzer Fellowship Program, Pittsburgh, PA, USA. Introduction: Interprofessional patient care is a well-recognized path that health care systems are striving toward. The Veteran's Affairs (VA) system initiated interprofessional practice (IPP) models with their Geriatric Evaluation and Management (GEM) programs. GEM programs incorporate a range of specialties, including but not limited to medicine, nursing, social work, physical therapy and pharmacy, to collaboratively evaluate veterans. Despite being a valuable resource, they are now faced with significant cut-backs, including closures. The primary goal of this project was to assess how the GEM model could be optimized at the Pittsburgh, Pennsylvania VA to allow for the sustainability of this important IPP assessment. Part 1 of the study evaluated the IPP process using program, patient, and family surveys. Part 2 examined how well the geriatrician matched patients to specialists in the GEM model. This paper describes Part 1 of our study. Methods: Three strategies were used: 1) a national GEM program survey; 2) a veteran/family satisfaction survey; and 3) an absentee assessment. Results: Twenty-six of 92 programs responded to the GEM IPP survey. Six strategies were shared to optimize IPP models throughout the country. Of the 34 satisfaction surveys, 80% stated the GEM clinic was beneficial, 79% stated their concerns were addressed, and 100% would recommend GEM to their friends. Of the 24 absentee assessments, the top three reasons for missing appointments were transportation, medical illness, and not knowing or remembering about the appointment. The absentee rate fell from 41% to 19% after instituting a reminder phone call policy. Discussion: Maintaining the

  6. Recombinant follitropin alfa/lutropin alfa in fertility treatment

    Directory of Open Access Journals (Sweden)

    Ahmed Gibreel

    2009-12-01

    Full Text Available Ahmed Gibreel1, Siladitya Bhattacharya2; 1School of Medicine and Dentistry, University of Aberdeen; 2Aberdeen Maternity Hospital, Aberdeen, UK. Abstract: Recombinant human follicle stimulating hormone (rFSH) and luteinizing hormone (LH), also known as follitropin alfa and lutropin alfa, are manufactured by genetic engineering techniques which ensure high quality and batch-to-batch consistency. Follitropin alfa can be used for controlled ovarian hyperstimulation in assisted reproduction, for ovulation induction in WHO group I and II anovulatory infertility, and in men with hypogonadotrophic hypogonadism (HH) or idiopathic oligo-asthenospermia. Current evidence suggests superiority of urinary human menopausal gonadotropin (HMG) over follitropin alfa in controlled ovarian hyperstimulation for IVF in terms of live birth rate per couple. Addition of lutropin to follitropin alfa in an unselected IVF population does not appear to confer any benefit; however, it may have a role in ovulation induction in women with hypothalamic hypogonadism. Urinary HMG preparations (especially the currently available highly purified preparations) are more cost-effective than rFSH in terms of cost per ongoing pregnancy. However, women using rFSH injection pen devices report higher levels of satisfaction than those using urinary HMG by means of conventional syringes. Keywords: infertility, follicle stimulating hormone, luteinizing hormone, follitropin alfa, lutropin alfa, in-vitro fertilization, urinary gonadotrophins

  7. Responsabilidade Social Corporativa: Um Estudo do Processo de Comunicação dos Varejistas Brasileiros

    Directory of Open Access Journals (Sweden)

    Lucas Sciencia do Prado

    2010-10-01

    Full Text Available The main objective of this study was to discuss and analyze how the ten largest Brazilian retailers communicate their social responsibility actions. A theoretical framework was developed covering the evolution of the concept of corporate social responsibility (CSR), stakeholder theory, and the concept of strategic CSR. The research was exploratory and qualitative. The ten largest Brazilian retailers of 2008 were analyzed, following the CSR communication framework proposed in the work of Du, Bhattacharya, and Sen for the analysis of the results. Data were collected from websites, company sustainability reports, and related media. The results showed that the framework proposed by those authors can be considered a good reference for constructing CSR messages. One contribution to the evolution of the framework would be the addition of a feedback mechanism, which could make the messages easier for their various audiences to understand. DOI: 10.5585/remark.v9i2.2142

  8. Crecimiento y mortalidad del pez Haemulon aurolineatum (Teleostei: Haemulidae) en el suroeste de la isla de Margarita, Venezuela = Growth and mortality of the fish Haemulon aurolineatum (Teleostei: Haemulidae) in the southwest of Margarita Island, Venezuela

    Directory of Open Access Journals (Sweden)

    Edwis Bravo

    2009-06-01

    Full Text Available Growth and natural mortality of the fish known locally as cují (Haemulon aurolineatum) were determined in the southwest of Margarita Island (July 2005 to June 2006), based on a sample of 2 541 specimens (1 378 males and 1 143 females) collected from the artisanal fishery of Boca del Río. The length-weight relationship showed no significant differences between males and females in either the slopes "b" (p>0.05; ts=-1.69) or the intercepts "a" (p>0.05; ts=-1.01), so a common relationship was established for both sexes: P = 0.038*LT^2.87. From the length-frequency distribution data, the asymptotic length (L∞) was estimated with the Powell-Wetherall routine and the growth coefficient (k) with the ELEFAN I routine (Gayanilo et al. 1996). Modal progression analysis was then applied, after decomposing the length frequencies by the Bhattacharya (1967) method, and the estimates of L∞ and k were refined with the Gulland and Holt (1959) procedure. The estimated growth parameters (L∞ = 24.2 cm and k = 0.48 yr^-1) indicated moderately fast growth. The length-frequency data were fitted to the von Bertalanffy (1960) model, showing an exponential trend: rapid growth up to 2 years of age, which then slowed until the fish reached its maximum length. The natural mortality rate was high (M = 1.15 yr^-1), probably due to heavy predation.
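    The growth curve named above can be written out directly. A minimal Python sketch using the estimated parameters (L∞ = 24.2 cm, k = 0.48 yr^-1); the age at zero length, t0 = 0, is an illustrative assumption, since t0 is not reported in the abstract:

```python
import math

def von_bertalanffy_length(t, l_inf=24.2, k=0.48, t0=0.0):
    """Von Bertalanffy growth: predicted total length (cm) at age t (years)."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Length approaches L-infinity asymptotically; growth is fastest early on.
for age in (1, 2, 5, 10):
    print(f"age {age:2d} y: {von_bertalanffy_length(age):5.1f} cm")
```

    The rapid early growth and subsequent slowing described in the abstract follow from the exponential term: the annual length increment shrinks by a factor of e^-k each year.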

  9. Análise das modificações da cobertura vegetal da planície fluvial do alto rio Paraná no período entre 1976 e 2007 = Analysis of vegetation changes in the Paraná river floodplain between 1976 and 2007

    Directory of Open Access Journals (Sweden)

    Edivando Vitor Couto

    2011-04-01

    Full Text Available Vegetation on the Paraná river floodplain has been under pressure from human occupation since the 1950s. The area near Porto Rico (Paraná state) has been studied since the 1980s, but there were no studies of the spatial distribution of its vegetation. The objectives of this work were to map the vegetation units and to trace the temporal evolution of their distribution between 1976 and 2007. Orbital images from 1976 (LANDSAT MSS-1), 1987 (LANDSAT TM-5), 2000 (LANDSAT ETM+7), and 2007 (CBERS-2 CCD) were used. The images were georeferenced with the SPRING 4.3.3 software and classified by the Bhattacharya algorithm; the maps were produced with the Global Mapper 7.4 software. Three classes of vegetation, water bodies, and exposed soil areas were identified. The multitemporal analysis showed a continuous increase in exposed soil, an increase in the area of water bodies, a shift in the position of the wetlands between 1976 and 1987, and subtler changes in both classes between 1987 and 2007. Areas of arboreal vegetation decreased in almost every year. These data indicate that human action on the floodplain has been expanding and that the major floods have modified its morphology.
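    The Bhattacharya classifier applied to these images ranks candidate classes by statistical separability. As an illustration of the underlying measure only (not of the SPRING implementation), the Bhattacharyya distance between two univariate Gaussian class distributions can be computed as:

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussian distributions."""
    term_var = 0.25 * math.log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    return term_var + term_mean

# Hypothetical per-band reflectance statistics for two land-cover classes
# (values are illustrative, not taken from the study).
d_similar  = bhattacharyya_gauss(0.30, 0.010, 0.32, 0.010)
d_distinct = bhattacharyya_gauss(0.30, 0.010, 0.60, 0.020)
assert d_distinct > d_similar  # better-separated classes score higher
```

    Pixels or regions are then assigned to the class whose distribution they are closest to; well-separated classes such as water versus exposed soil yield large distances.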

  10. The Integration of Bacteriorhodopsin Proteins with Semiconductor Heterostructure Devices

    Science.gov (United States)

    Xu, Jian

    2008-03-01

    Bioelectronics has emerged as one of the most rapidly developing fields among the active frontiers of interdisciplinary research. A major thrust in this field is aimed at coupling the technologically unmatched performance of biological systems, such as neural and sensing functions, with the well-developed technology of microelectronics and optoelectronics. To this end we have studied the integration of a suitably engineered protein, bacteriorhodopsin (BR), with semiconductor optoelectronic devices and circuits. Successful integration will potentially lead to ultrasensitive sensors with polarization selectivity and built-in preprocessing capabilities that will be useful for high-speed tracking, motion and edge detection, biological detection, and artificial vision systems. In this presentation we will summarize our progress in this area, which includes fundamental studies of the transient dynamics of the photo-induced charge shift in BR and of the coupling mechanism at the protein-semiconductor interface for effectively immobilizing and selectively integrating light-sensitive proteins with microelectronic devices and circuits, as well as the device engineering of BR-transistor-integrated optical sensors and their applications in phototransceiver circuits. Work done in collaboration with Pallab Bhattacharya, Jonghyun Shin, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI; Robert R. Birge, Department of Chemistry, University of Connecticut, Storrs, CT 06269; and György Váró, Institute of Biophysics, Biological Research Center of the Hungarian Academy of Science, H-6701 Szeged, Hungary.

  11. Serpine2 deficiency results in lung lymphocyte accumulation and bronchus-associated lymphoid tissue formation.

    Science.gov (United States)

    Solleti, Siva Kumar; Srisuma, Sorachai; Bhattacharya, Soumyaroop; Rangel-Moreno, Javier; Bijli, Kaiser M; Randall, Troy D; Rahman, Arshad; Mariani, Thomas J

    2016-07-01

    Serine proteinase inhibitor, clade E, member 2 (SERPINE2), is a cell- and extracellular matrix-associated inhibitor of thrombin. Although SERPINE2 is a candidate susceptibility gene for chronic obstructive pulmonary disease, the physiologic role of this protease inhibitor in lung development and homeostasis is unknown. We observed spontaneous monocytic-cell infiltration in the lungs of Serpine2-deficient (SE2(-/-)) mice, beginning at or before the time of lung maturity, which resulted in lesions that resembled bronchus-associated lymphoid tissue (BALT). The initiation of lymphocyte accumulation in the lungs of SE2(-/-) mice involved the excessive expression of chemokines, cytokines, and adhesion molecules that are essential for BALT induction, organization, and maintenance. BALT-like lesion formation in the lungs of SE2(-/-) mice was also associated with a significant increase in the activation of thrombin, a recognized target of SE2, and excess stimulation of NF-κB, a major regulator of chemokine expression and inflammation. Finally, systemic delivery of thrombin rapidly stimulated lung chemokine expression in vivo. These data uncover a novel mechanism whereby loss of serine protease inhibition leads to lung lymphocyte accumulation. -Solleti, S. K., Srisuma, S., Bhattacharya, S., Rangel-Moreno, J., Bijli, K. M., Randall, T. D., Rahman, A., Mariani, T. J. Serpine2 deficiency results in lung lymphocyte accumulation and bronchus-associated lymphoid tissue formation. © FASEB.

  12. Report on the Energy and Intensity Frontiers, and Theoretical Physics, at Northwestern University

    Energy Technology Data Exchange (ETDEWEB)

    Velasco, Mayda [Northwestern Univ., Evanston, IL (United States); Schmitt, Michael [Northwestern Univ., Evanston, IL (United States); deGouvea, Andre [Northwestern Univ., Evanston, IL (United States); Low, Ian [Northwestern Univ., Evanston, IL (United States); Petriello, Frank [Northwestern Univ., Evanston, IL (United States); Schellman, Heidi [Northwestern Univ., Evanston, IL (United States)

    2016-03-31

    The Northwestern (NU) Particle Physics (PP) group involved in this report is active in all of the following priority areas: the Energy and Intensity Frontiers. The group is led by 2 full professors in experimental physics (Schmitt and Velasco), 3 full professors in theoretical physics (de Gouvea, Low and Petriello), and Heidi Schellman, who is now at Oregon State. Low and Petriello hold joint appointments with the HEP Division at Argonne National Laboratory. The theoretical PP research focuses on different aspects of PP phenomenology. de Gouvea dedicates a large fraction of his research efforts to understanding the origin of neutrino masses and neutrino properties, uncovering other new phenomena, and investigating connections between neutrino physics and other aspects of PP. Low works on Higgs physics as well as new theories beyond the Standard Model. Petriello pursues a research program in precision QCD and its associated collider phenomenology. The main goal of this effort is to improve the Standard Model predictions for important LHC observables in order to enable discoveries of new physics. In recent years, the emphasis of experimental PP at NU has been on collider physics. NU is expanding its efforts in new directions in both the Intensity and the Cosmic Frontiers (the latter not discussed in this report). In the Intensity Frontier, Schmitt has started a new effort on Mu2e. He was accepted as a collaborator in April 2015 and is identified with important projects. In the Energy Frontier, Hahn, Schmitt and Velasco continue to have a significant impact and have expanded their CMS program to include R&D for the real-time L1 tracking trigger and the high-granularity calorimeter needed for the high-luminosity LHC. Hahn is supported by an independent DOE Career Award and his work will not be discussed in this document. The NU analysis effort includes searches for rare and forbidden decays of the Higgs boson, Z boson, and top quark, dark matter, and other physics beyond the Standard Model topics. Four

  13. Adaptive and active materials: selected papers from the ASME 2013 Conference on Smart Materials, Adaptive Structures and Intelligent Systems (SMASIS 13) (Snowbird, UT, USA, 16-18 September 2013)

    Science.gov (United States)

    Johnson, Nancy; Naguib, Hani; Turner, Travis; Anderson, Iain; Bassiri-Gharb, Nazanin; Daqaq, Mohammed; Baba Sundaresan, Vishnu; Sarles, Andy

    2014-10-01

    The sixth annual meeting of the ASME Smart Materials, Adaptive Structures and Intelligent Systems Conference (SMASIS) was held in the beautiful mountain-encircled Snowbird Resort and Conference Center in Little Cottonwood Canyon near Salt Lake City, Utah. The conference's objective is to provide an up-to-date overview of research trends in the entire field of smart material systems in a friendly, casual forum conducive to the exchange of ideas and the latest results. As we strive each year to grow and offer new experiences, this year we included special focused-topic tracks on nanoscale multiferroic materials and origami engineering. The cross-disciplinary emphasis was reflected in keynote speeches by Professor Kaushik Bhattacharya (California Institute of Technology) on 'Cyclic Deformation and the Interplay between Phase Transformation and Plasticity in Shape Memory Alloys', by Professor Alison Flatau (University of Maryland at College Park) on 'Structural Magnetostrictive Alloys: The Other Smart Material', and by Dr Leslie Momoda (Director of the Sensors and Materials Laboratories, HRL Laboratories, LLC, Malibu, CA) on 'Architecturing New Functional Materials: An Industrial Perspective'. SMASIS 2013 was divided into seven symposia which span basic research, applied technological design and development, and industrial and governmental integrated-system and application demonstrations. SYMP 1. Development and Characterization of Multifunctional Materials. SYMP 2. Mechanics and Behavior of Active Materials. SYMP 3. Modeling, Simulation and Control of Adaptive Systems. SYMP 4. Integrated System Design and Implementation. SYMP 5. Structural Health Monitoring. SYMP 6. Bioinspired Smart Materials and Systems. SYMP 7. Energy Harvesting. Authors of selected papers in the materials areas (symposia 1, 2, and 6) as well as energy harvesting (symposium 7) were invited to write a full journal article on their presentation topic for publication in this special issue of Smart

  14. Model-model Perencanaan Strategik = Strategic Planning Models

    OpenAIRE

    Amirin, Tatang M

    2005-01-01

    The process of strategic planning, formerly called long-term planning, consists of several components, including strategic analysis, setting the strategic direction (covering mission, vision, and values), and action planning. Many writers have developed models representing the steps of the strategic planning process, i.e. the basic planning model, the problem-based planning model, the scenario model, and the organic or self-organizing model.

  15. Model-to-model interface for multiscale materials modeling

    Energy Technology Data Exchange (ETDEWEB)

    Antonelli, Perry Edward [Iowa State Univ., Ames, IA (United States)

    2017-12-17

    A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale materials problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
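    The export/import idea can be sketched compactly. The following is a hypothetical illustration with invented class and field names (it is not the author's actual API): each model is wrapped in an adapter that exposes a shared schema, so the two codes can exchange state without any changes to their internals.

```python
class ModelInterface:
    """Adapter exposing a standard export/import schema for a wrapped model."""

    def __init__(self, model, schema):
        self.model = model
        self.schema = schema          # field names allowed to cross the interface

    def export_state(self):
        return {f: getattr(self.model, f) for f in self.schema}

    def import_state(self, state):
        for field, value in state.items():
            if field in self.schema:  # ignore anything outside the schema
                setattr(self.model, field, value)

class AtomisticModel:                 # stand-in for an MD code
    def __init__(self):
        self.wall_velocity = 0.0

class ContinuumModel:                 # stand-in for a fluid-flow solver
    def __init__(self):
        self.wall_velocity = 1.5      # boundary condition from the fluid solve

# Couple continuum -> atomistic without touching either model's internals.
md    = ModelInterface(AtomisticModel(), ["wall_velocity"])
fluid = ModelInterface(ContinuumModel(), ["wall_velocity"])
md.import_state(fluid.export_state())
assert md.model.wall_velocity == 1.5
```

    Because only schema-declared fields cross the boundary, either model can be swapped out (here, for LAMMPS or an in-house MD code) as long as it honors the same schema.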

  16. Models and role models.

    Science.gov (United States)

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that fluoride is perhaps not sufficiently potent to reduce dental caries in present-day society triggered us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Unlike studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries, and to evaluate products and novel agents that interfere with either of these processes. Having these separate models in place, a suggestion is made to design computer models that encompass the available information. Models, but also role models, are of the utmost importance in advancing and guiding research and researchers. © 2015 S. Karger AG, Basel

  17. Hydrological models are mediating models

    Science.gov (United States)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It results from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction would deserve closer attention, for being rarely explicitly presented in peer-reviewed literature. We believe that devoting

  18. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Full Text Available Deep learning has recently made great breakthroughs in visual and speech processing, mainly because it draws on the hierarchical way the brain deals with images and speech. In the field of NLP, topic models are one of the important approaches to modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, which is unsupervised and based on the language-processing mechanisms described by neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in those events. The Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words with deep learning. Dimensionality reduction represents a document as a low-dimensional vector through a linear map, a process completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.
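    The two stages can be caricatured in a few lines. In this sketch the "learned" word vectors and the linear map are random stand-ins (in the paper both would come from training), so all numbers are purely illustrative:

```python
import random

random.seed(0)
DIM_WORD, DIM_DOC = 8, 3

# Stage 1 stand-in: "learned" word vectors (random here, trained in the paper).
vocab = {w: [random.gauss(0, 1) for _ in range(DIM_WORD)]
         for w in ["storm", "flood", "river", "match", "goal"]}

# Stage 2 stand-in: a fixed linear map to a low-dimensional document space.
proj = [[random.gauss(0, 1) for _ in range(DIM_WORD)]
        for _ in range(DIM_DOC)]

def doc_vector(words):
    """Average the word vectors of a document, then apply the linear map."""
    vecs = [vocab[w] for w in words if w in vocab]
    mean = [sum(col) / len(vecs) for col in zip(*vecs)]
    return [sum(p * m for p, m in zip(row, mean)) for row in proj]

v = doc_vector(["storm", "flood", "river"])
assert len(v) == DIM_DOC  # compact document representation for retrieval
```

    Unlike a topic model, no generative story is assumed: the document vector is just a linear function of its word vectors, and retrieval compares these vectors directly.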

  19. Two-dimensional collagen-graphene as colloidal templates for biocompatible inorganic nanomaterial synthesis

    Directory of Open Access Journals (Sweden)

    Kumari D

    2017-05-01

    Full Text Available Divya Kumari,1,* Lubna Sheikh,1,* Soumya Bhattacharya,1,* Thomas J Webster,2 Suprabha Nayar1 1Materials Science and Technology Division, CSIR-National Metallurgical Laboratory, Burmamines, Jamshedpur, India; 2Department of Chemical Engineering, Northeastern University, Boston, MA, USA *These authors contributed equally to this work. Abstract: In this study, natural graphite was first converted to collagen-graphene composites and then used as a template for the synthesis of nanoparticles of silver, iron oxide, and hydroxyapatite. X-ray diffraction did not show any diffraction peaks of graphene in the composites after inorganic nucleation, compared to the naked composite, which showed (002) and (004) peaks. Scanning electron micrographs showed lateral gluing/docking of these composites, possibly driven by an electrostatic attraction between the positive layers of one stack and the negative layers of another, which became distorted after inorganic nucleation. Docking resulted in single-layer-like characteristics in certain places, as seen under transmission electron microscopy, but sp2/sp3 ratios from Raman analysis indicated three-layer composite formation. Strain-induced folding of these layers into uniform clusters at the point of critical nucleation revealed beautiful microstructures under scanning electron microscopy. Lastly, cell viability studies using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assays showed the highest cell viability for the collagen-graphene-hydroxyapatite composites. In this manner, this study provided the field of nanomedicine with a new, low-toxicity process for the synthesis of several nanoparticles of high interest for numerous medical applications. Keywords: composites, graphene, collagen, lateral gluing, inorganic nanoparticles

  20. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
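    The forecasting step can be illustrated with a toy scalar model: rather than fixing the uncertain parameter, the physics-based model is integrated over an ensemble of parameter values sampled from a learned history, which stands in here for the diffusion-forecast density. The model, names, and numbers below are all invented for illustration and are not the authors' algorithm:

```python
import random

random.seed(1)

def physics_model(x, forcing):
    """Toy 'physics-based' model: relax toward an uncertain hidden forcing."""
    return x + 0.1 * (forcing - x)

# Stand-in for the parameter time series a Bayesian filter would extract
# from noisy historical observations (hidden forcing near 8.0).
history = [8.0 + random.gauss(0.0, 0.5) for _ in range(200)]

# Ensemble forecast: each member draws its parameter from the empirical
# density of the retrieved time series instead of using one fixed value.
n_ens, steps = 100, 50
finals = []
for _ in range(n_ens):
    x = 0.0
    for _ in range(steps):
        x = physics_model(x, random.choice(history))
    finals.append(x)

forecast = sum(finals) / n_ens       # ensemble-mean forecast
assert 7.0 < forecast < 9.0          # tracks the hidden forcing (~8)
```

    The ensemble spread, not shown here, is what carries the model-error information that a single fixed-parameter run would miss.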

  1. Upper mixed layer temperature anomalies at the North Atlantic storm-track zone

    Science.gov (United States)

    Moshonkin, S. N.; Diansky, N. A.

    1995-10-01

    Synoptic sea surface temperature anomalies (SSTAs) were determined by separating out time scales smaller than 183 days. The SSTAs were investigated using daily data of ocean weather station C (52.75°N, 35.5°W) from 1 January 1976 to 31 December 1980 (1827 days). There were 47 positive and 50 negative significant SSTAs (lifetime longer than 3 days, absolute value greater than 0.10 °C), with four main intervals of lifetime repetition: 1) 4-7 days (45% of all cases), 2) 9-13 days (20-25%), 3) 14-18 days (10-15%), and 4) 21-30 days (10-15%), and with magnitudes of 1.5-2.0 °C. An upper-layer balance model based on equations for temperature, salinity, mechanical energy (with advanced parametrization), state (density), and drift currents was used to simulate the SSTAs. The original modelling method, which takes into account the mean observed temperature profiles, proved to be very stable. The model SSTAs are in good agreement with the observed amplitudes and phases of synoptic SSTAs during all 5 years. Surface heat flux anomalies are the main source of SSTAs. The influence of anomalous drift heat advection is about 30-50% of the SSTA, and the influence of salinity anomalies is about 10-25% or less. The influence of a large-scale ocean front was isolated only once during all 5 years, in February-April 1978. Synoptic SSTAs develop in just the upper half of the homogeneous layer each winter. We suggest that there are two main causes of such active sublayer formation: 1) surface heat flux in the warm sectors of cyclones and 2) predominant heat transport by ocean currents from the south. All frequency functions of the ocean temperature synoptic response to heat and momentum surface fluxes are of integral character (red noise), though there is a strong resonance of wind-driven horizontal heat advection with mixed-layer temperature at a 20-day period; there are some other peculiarities on time scales from 5.5 to 13 days. Observed and modelled frequency functions
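    The time-scale separation mentioned in the first sentence can be sketched as a high-pass filter: subtract a slowly varying background and keep the residual as the synoptic anomaly. A centered running mean over a 183-day window is used here as an illustrative stand-in for the actual separation method, with toy data:

```python
def synoptic_anomaly(series, window=183):
    """High-pass a daily series: value minus a centered running mean."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        background = sum(series[lo:hi]) / (hi - lo)
        out.append(series[i] - background)
    return out

sst = [10.0] * 1827                   # 5 years of daily SST, flat baseline
for d in range(500, 510):             # 10-day warm event of +1.5 degrees
    sst[d] += 1.5

anom = synoptic_anomaly(sst)
assert anom[505] > 1.3                # the short event survives the filter
assert abs(anom[100]) < 1e-9          # quiet days stay near zero
```

    Any fluctuation shorter than the window passes through almost unchanged, while variability slower than 183 days is absorbed into the background, which matches the scale separation described above.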

  2. Vector models and generalized SYK models

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Cheng [Department of Physics, Brown University,Providence RI 02912 (United States)

    2017-05-23

    We consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. A chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  3. [Bone remodeling and modeling/mini-modeling].

    Science.gov (United States)

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone at the fetal and developmental stages, while bone remodeling, the replacement of old bone with new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become an active form of osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporosis treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts are formed over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and of modeling, including mini-modeling, are introduced.

  4. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    International Nuclear Information System (INIS)

    Clinton Lum

    2002-01-01

    The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, "3-D Rock Properties Modeling for FY 1998" (SNL 1997); WA-0358, "3-D Rock Properties Modeling for FY 1999" (SNL 1999); and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, "Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01" (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) conversion of the input data (laboratory-measured porosity data, X-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) re-sampling and merging of data sets; (3) development of geostatistical simulations of porosity; (4

  5. Automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

    Science.gov (United States)

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to building three-dimensional structural models of proteins, using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique for modeling protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of applications. Since the usefulness of a model for a specific application is determined by its accuracy, model quality estimation is an essential component of protein structure prediction. Comparative protein modeling has become a routine approach in many areas of life science research, since fully automated modeling systems allow even nonexperts to build reliable models. In this chapter, we describe practical approaches for automated protein structure modeling with the SWISS-MODEL Workspace and the Protein Model Portal.

  6. Geologic Framework Model Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). 
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  7. Geologic Framework Model Analysis Model Report

    International Nuclear Information System (INIS)

    Clayton, R.

    2000-01-01

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). 
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and

  8. Inter-cohort growth patterns of pharaoh cuttlefish Sepia pharaonis (Sepioidea: Sepiidae) in Eastern Arabian Sea.

    Science.gov (United States)

    Sasikumarl, Geetha; Mohamed, K S; Bhat, U S

    2013-03-01

    Sepia pharaonis is an important commercial species endemic to the tropical Indo-Pacific region. Despite its commercial significance, little information on natural populations is available. This study aimed to describe the size composition, length-weight relationship, catch rates, seasonal recruitment, and inter-cohort growth patterns of the S. pharaonis population (Clade C) distributed along the Eastern Arabian Sea (south-west coast of India). The Dorsal Mantle Length (DML) and weight of cuttlefishes were obtained from commercial trawl catches from April 2002 to October 2006, and the data were analyzed using standard length-based methods, including the von Bertalanffy growth model. A total of 12,454 cuttlefishes, ranging in length from 4 to 41 cm, were analyzed. Size-composition patterns discriminated two pulses in recruitment to the fishery, discernible by a decrease in the monthly mean size of the population. The DMLs of the two seasonal cohorts were subjected to modal-progression analysis using Bhattacharya's method for the estimation of growth. The estimated parameters L∞ and K of the von Bertalanffy Growth Function (VBGF) were used to model growth curves in length for the cohorts. The first cohort (post-monsoon), which supports the major fishery, was composed of medium-sized, fast-growing individuals, whereas the second cohort (pre-monsoon) comprised slow-growing, large-sized individuals. There were differential growth characteristics between the sexes, and the life span was estimated at less than 2.3 years for males and 2.1 years for females. Negative allometric growth in weight (W) with length (L) was observed for males (W = 0.33069·L^2.5389) and females (W = 0.32542·L^2.6057). The females were heavier than males at any given mantle length, and the males were found to attain larger ultimate lengths. 
The major fishing season for cuttlefish was from May to November, when higher monthly catch rates of 1.67-13.02 kg/h were observed in comparison
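The growth and length-weight relations used in the study can be written out as a small sketch: the von Bertalanffy Growth Function L(t) = L∞·(1 − exp(−K·(t − t0))) and the allometric relation W = a·L^b, where b < 3 indicates negative allometric growth. The parameter values below are round illustrative numbers, not the study's estimates.

```python
import math

def vbgf_length(t, L_inf, K, t0=0.0):
    """von Bertalanffy Growth Function: L(t) = L_inf * (1 - exp(-K (t - t0)))."""
    return L_inf * (1.0 - math.exp(-K * (t - t0)))

def weight_from_length(L, a, b):
    """Allometric length-weight relation W = a * L**b (b < 3: negative allometry)."""
    return a * L ** b

# Illustrative parameters only (not the paper's estimates).
L = vbgf_length(t=1.0, L_inf=40.0, K=1.2)   # mantle length at age 1
W = weight_from_length(L, a=0.33, b=2.54)   # corresponding weight
```

Length approaches the asymptote L∞ but never exceeds it, which is why modal-progression methods such as Bhattacharya's are used to track cohorts toward that asymptote.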

  9. Model(ing) Law

    DEFF Research Database (Denmark)

    Carlson, Kerstin

    The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...

  10. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
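A minimal simulation of the basic actor-partner interdependence model makes the structure concrete: each dyad member's outcome is regressed on their own predictor (actor effect) and their partner's predictor (partner effect). The data, effect sizes, and pooled least-squares estimator below are illustrative assumptions, not the marital-conflict analysis or the multilevel/SEM machinery compared in the article.

```python
import numpy as np

rng = np.random.default_rng(42)
n_dyads = 2000
x1 = rng.standard_normal(n_dyads)   # predictor, dyad member 1
x2 = rng.standard_normal(n_dyads)   # predictor, dyad member 2

actor, partner = 0.5, 0.3           # true actor and partner effects
y1 = actor * x1 + partner * x2 + 0.1 * rng.standard_normal(n_dyads)
y2 = actor * x2 + partner * x1 + 0.1 * rng.standard_normal(n_dyads)

# Stack both members: each outcome is regressed on the person's own
# predictor (column 0, actor) and the partner's predictor (column 1).
X = np.column_stack([np.concatenate([x1, x2]),
                     np.concatenate([x2, x1])])
y = np.concatenate([y1, y2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # [actor_hat, partner_hat]
```

With exchangeable dyad members and a basic model, this pooled regression recovers both effects, mirroring the article's finding that the two estimation approaches give virtually identical estimates for the basic case.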

  11. Translating building information modeling to building energy modeling using model view definition.

    Science.gov (United States)

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.
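The object-mapping idea behind a Model View Definition can be sketched in miniature: a BIM-like record is translated into a BEM-oriented class holding only what an energy simulation needs. All names and fields here are hypothetical simplifications, not the actual Revit2Modelica interface or Modelica Buildings classes.

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified BIM record for one wall; the real
# BIM2BEM MVD maps far richer geometry, material, and topology data.
bim_wall = {"type": "Wall", "area_m2": 12.0,
            "material": {"name": "brick",
                         "conductivity_W_mK": 0.6,
                         "thickness_m": 0.2}}

@dataclass
class BemSurface:
    area: float     # m^2
    u_value: float  # W/(m^2 K), ignoring surface film resistances

def map_wall(bim: dict) -> BemSurface:
    """Map a BIM wall record onto the BEM target class (toy MVD rule)."""
    m = bim["material"]
    return BemSurface(area=bim["area_m2"],
                      u_value=m["conductivity_W_mK"] / m["thickness_m"])

surf = map_wall(bim_wall)   # u_value ≈ 0.6 / 0.2 = 3.0 W/(m^2 K)
```

The point of the intermediate class is the same as in the paper's class package: downstream thermal simulation consumes `BemSurface` objects without ever touching the BIM encoding directly.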

  12. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Directory of Open Access Journals (Sweden)

    WoonSeong Jeong

    2014-01-01

    Full Text Available This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  13. Models Archive and ModelWeb at NSSDC

    Science.gov (United States)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSISE-90) model, the International Geomagnetic Reference Field (IGRF), and the AP-8/AE-8 models for radiation belt protons and electrons. User accesses to both systems have been increasing steadily over recent years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 to the models archive.

  14. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models.   Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known &...

  15. Model Manipulation for End-User Modelers

    DEFF Research Database (Denmark)

    Acretoaie, Vlad

    End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support... ...and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality. To simplify model editor...

  16. Modeling energy-economy interactions using integrated models

    International Nuclear Information System (INIS)

    Uyterlinde, M.A.

    1994-06-01

    Integrated models are defined as economic energy models that consist of several submodels, either coupled by an interface module or embedded in one large model. These models can be used for energy policy analysis. Using integrated models yields the following benefits. They provide a framework in which energy-economy interactions can be analyzed better than in stand-alone models. Integrated models can represent both energy-sector technological details and the behaviour of the market and the role of prices. Furthermore, the combination of modeling methodologies in one model can compensate for the weaknesses of one approach with the strengths of another. These advantages motivated this survey of the class of integrated models. The purpose of this literature survey was therefore to collect and present information on integrated models. To carry out this task, several goals were identified. The first was to give an overview of what is reported on these models in general. The second was to find and describe examples of such models. Other goals were to find out what kinds of models were used as component models and to examine the linkage methodology. Solution methods and their convergence properties were also a subject of interest. The report has the following structure. In chapter 2, a 'conceptual framework' is given. In chapter 3, a number of integrated models are described, and a table presents a complete overview of all the models described. Finally, in chapter 4, the report is summarized and conclusions are drawn regarding the advantages and drawbacks of integrated models. 8 figs., 29 refs
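The linkage and convergence questions raised in the survey can be illustrated with a toy coupling of two submodels solved by fixed-point (Gauss-Seidel-style) iteration through an "interface". Both functional forms below are invented for illustration; real integrated models iterate over far larger equation systems.

```python
def energy_price(demand):
    """Toy energy-sector submodel: price rises with demand (hypothetical form)."""
    return 10.0 + 0.5 * demand

def energy_demand(price):
    """Toy macroeconomic submodel: demand falls with price (hypothetical form)."""
    return 100.0 - price

def solve_linked(tol=1e-9, max_iter=100):
    """Iterate the two submodels until the exchanged variables stabilize."""
    demand = 50.0                      # initial guess passed to the interface
    for _ in range(max_iter):
        price = energy_price(demand)   # energy model responds to demand
        new_demand = energy_demand(price)  # economy responds to price
        if abs(new_demand - demand) < tol:
            return price, new_demand
        demand = new_demand
    raise RuntimeError("linkage did not converge")

price, demand = solve_linked()   # converges to price = 40, demand = 60
```

Because the composed mapping here is a contraction (each pass halves the error), the iteration converges; with steeper response curves it can oscillate or diverge, which is exactly the convergence concern the survey raises for linked models.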

  17. On the role of model structure in hydrological modeling : Understanding models

    NARCIS (Netherlands)

    Gharari, S.

    2016-01-01

    Modeling is an essential part of the science of hydrology. Models enable us to formulate what we know and perceive from the real world into a neat package. Rainfall-runoff models are abstract simplifications of how a catchment works. Within the research field of scientific rainfall-runoff modeling,

  18. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    Science.gov (United States)

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
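The kind of update classification performed in this study (insertions, deletions, and modifications between two releases) can be sketched for models encoded as id-to-attributes mappings. This is a toy stand-in, not the XML-level diffing of BiVeS on which the study's tooling builds.

```python
def classify_changes(old: dict, new: dict) -> dict:
    """Classify per-element changes between two releases of a model.

    Each release maps element ids to attribute dicts; an element counts
    as updated when its attributes (e.g. annotations) changed.
    """
    return {
        "inserted": sorted(new.keys() - old.keys()),
        "deleted": sorted(old.keys() - new.keys()),
        "updated": sorted(k for k in old.keys() & new.keys()
                          if old[k] != new[k]),
    }

# Hypothetical two releases of a tiny model: an annotation is added to
# species s1, and reaction r1 is replaced by r2.
v1 = {"s1": {"name": "LacI"}, "r1": {"rate": "k*s1"}}
v2 = {"s1": {"name": "LacI", "annotation": "GO:0003700"},
      "r2": {"rate": "k2*s1"}}
changes = classify_changes(v1, v2)
```

Note how the annotation-only edit to `s1` is surfaced as an update: as the study observes, many real model updates target annotations rather than the mathematics.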

  19. Model documentation report: Transportation sector model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  20. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
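The accuracy levels quoted above are Cα RMSD values, i.e. the root-mean-square deviation of matched Cα atoms after optimal rigid superposition. A compact sketch of that computation, using the standard Kabsch algorithm via SVD, follows; this is a textbook implementation, not the CAPRI assessment code.

```python
import numpy as np

def ca_rmsd(P, Q):
    """C-alpha RMSD between two matched (N, 3) coordinate sets after
    optimal rigid superposition (Kabsch algorithm via SVD)."""
    P = P - P.mean(axis=0)                  # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)       # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))      # guard against a reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt     # optimal proper rotation
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))
```

For identical structures that differ only by a rotation and translation the RMSD is (numerically) zero; for a 5-6 Å model of the same chain it reports the residual per-atom deviation that the study uses to grade model accuracy.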

  1. Modeling styles in business process modeling

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Zugal, S.; Weber, B.; Weidlich, M.; Fahland, D.; Reijers, H.A.; Mendling, J.; Bider, I.; Halpin, T.; Krogstie, J.; Nurcan, S.; Proper, E.; Schmidt, R.; Soffer, P.; Wrycza, S.

    2012-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models. As a consequence, the question arises whether different ways of creating process models exist. In this vein, we observed 115 students engaged in the act of modeling, recording

  2. Comparison: Binomial model and Black Scholes model

    Directory of Open Access Journals (Sweden)

    Amir Ahmad Dar

    2018-03-01

    Full Text Available The Binomial Model and the Black Scholes Model are popular methods used to solve option pricing problems. The Binomial Model is a simple statistical method, while the Black Scholes Model requires the solution of a stochastic differential equation. Pricing European call and put options is a difficult task faced by actuaries. The main goal of this study is to compare the Binomial Model and the Black Scholes Model using two statistical tests - the t-test and Tukey's test - at one period. The results showed no significant difference between the mean European option prices obtained from the two models.
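The two pricers being compared can be sketched directly: for a European call, the Cox-Ross-Rubinstein binomial tree converges to the Black-Scholes closed form as the number of steps grows. The parameter values below are illustrative, and the study's statistical comparison (t-test, Tukey's test) is not reproduced here.

```python
import math

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # std normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def binomial_call(S, K, T, r, sigma, n):
    """Cox-Ross-Rubinstein binomial tree price of a European call."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))         # up factor
    d = 1.0 / u                                 # down factor
    p = (math.exp(r * dt) - d) / (u - d)        # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # Payoffs at maturity, then discount backward through the tree.
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

bs = black_scholes_call(100, 100, 1.0, 0.05, 0.2)
bt = binomial_call(100, 100, 1.0, 0.05, 0.2, n=500)
```

With 500 steps the two prices agree to a few cents for an at-the-money option, which is consistent with the study's finding of no significant difference between the models.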

  3. Modelling in Business Model design

    NARCIS (Netherlands)

    Simonse, W.L.

    2013-01-01

    It appears that business model design might not always produce a design or model as the expected result. However, when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designers' challenge is to combine strategy and

  4. Modelling SDL, Modelling Languages

    Directory of Open Access Journals (Sweden)

    Michael Piefel

    2007-02-01

    Full Text Available Today's software systems are too complex to implement and model using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Handling an increasing number of related or integrated languages is thus the most challenging task in the development of tools. We use object-oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes) from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.

  5. Model integration and a theory of models

    OpenAIRE

    Dolk, Daniel R.; Kottemann, Jeffrey E.

    1993-01-01

    Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: Organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integr...

  6. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  7. Concept Modeling vs. Data modeling in Practice

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2015-01-01

    This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models. We also show how to map from the various elements in the terminological ontology to elements in the data models, and explain the differences between the models. Finally, the usefulness of terminological ontologies as a prerequisite for IT development and data modeling is illustrated with examples from

  8. Automated Protein Structure Modeling with SWISS-MODEL Workspace and the Protein Model Portal

    OpenAIRE

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of appl...

  9. Model documentation report: Transportation sector model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    Over the past year, several modifications have been made to the NEMS Transportation Model, incorporating greater levels of detail and analysis in modules previously represented in the aggregate or under a profusion of simplifying assumptions. This document is intended to amend those sections of the Model Documentation Report (MDR) which describe these superseded modules. Significant changes have been implemented in the LDV Fuel Economy Model, the Alternative Fuel Vehicle Model, the LDV Fleet Module, and the Highway Freight Model. The relevant sections of the MDR have been extracted from the original document, amended, and are presented in the following pages. A brief summary of the modifications follows: In the Fuel Economy Model, modifications have been made which permit the user to employ more optimistic assumptions about the commercial viability and impact of selected technological improvements. This model also explicitly calculates the fuel economy of an array of alternative fuel vehicles (AFVs) which are subsequently used in the estimation of vehicle sales. In the Alternative Fuel Vehicle Model, the results of the Fuel Economy Model have been incorporated, and the program flows have been modified to reflect that fact. In the Light Duty Vehicle Fleet Module, the sales of vehicles to fleets of various sizes are endogenously calculated in order to provide a more detailed estimate of the impacts of EPACT legislation on the sales of AFVs to fleets. In the Highway Freight Model, the previous aggregate estimation has been replaced by a detailed Freight Truck Stock Model, where travel patterns, efficiencies, and energy intensities are estimated by industrial grouping. Several appendices are provided at the end of this document, containing data tables and supplementary descriptions of the model development process which are not integral to an understanding of the overall model structure.

  10. The IMACLIM model; Le modele IMACLIM

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    This document provides annexes to the IMACLIM model, giving an updated description of IMACLIM, a model designed as a tool for evaluating greenhouse gas reduction policies. The model is described in a version coupled with POLES, a techno-economic model of the energy sector. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)

  11. Modelling Practice

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data...... requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...

  12. Flood risk analysis model in the village of St. George/Danube Delta

    Science.gov (United States)

    Armas, I.; Dumitrascu, S.; Nistoran, D.

    2009-04-01

    River deltas may have been cradles for prehistoric civilizations (Day et al. 2007) and still represent favoured areas for human habitats on the basis of their high productivity, biodiversity and favourable economic conditions for river transport (Giosan and Bhattacharya 2005). At the same time, these regions are defined through their high vulnerability to environmental changes, being extremely susceptible to natural disasters, especially to floods. The Danube Delta, with an area of 5640 km2, is the largest ecosystem of the European humid zones. Its state reflects environmental conditions at both local and regional levels via liquid and solid parameters and has to ensure the water supply for the local economy and communities. Flooding of the delta is important for the dynamics of the entire natural system. Floods sustain both alluvial processes and the water supply to deltaic lakes. In addition, flooding frequency is important in flushing the deltaic lake system water, ensuring a normal evolution of both terrestrial and aquatic ecosystems. For human communities, on the other hand, floods are perceived as a risk factor, entailing material damage, human victims and psychological stress. From the perspective of risk assessment research, every populated place faces a certain level of risk from disasters, the magnitude of which depends on the specific location, existent hazards, vulnerability and the number of elements at risk. Although natural hazards are currently a main subject of interest on a global scale, a unitary methodological approach has yet to be developed. In the general context of hazard analysis, there is a need to put more emphasis on the problem of risk analysis. In most cases, it focuses only on an assessment of the probable material damage resulting from a specific risk scenario. Taking these matters into consideration, the aim of this study is to develop an efficient flood risk assessment methodology based on the example of the village of St. 
George in

  13. Leadership Models.

    Science.gov (United States)

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  14. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context.

  15. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure that a maximum number of components is taken into account, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. 
The discussion will focus on the different approaches through which connectivity

  16. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
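The variance segregation described above can be illustrated at a single node of the BMA tree: the total predictive variance splits into a within-model term (the averaged predictive variances of the candidate models) and a between-model term (the spread of the candidate means). The sketch below is a generic illustration under invented names, not the authors' HBMA code:

```python
import numpy as np

def bma_moments(means, variances, probs):
    """Combine candidate-model predictions by Bayesian model averaging.

    Returns the BMA mean, the within-model variance and the
    between-model variance; their sum is the total predictive variance.
    """
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()          # normalise posterior model probabilities
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    bma_mean = np.sum(probs * means)
    within = np.sum(probs * variances)                  # uncertainty inside each model
    between = np.sum(probs * (means - bma_mean) ** 2)   # disagreement among models
    return bma_mean, within, between

# Two candidate models with equal posterior probability:
mean, within, between = bma_moments([10.0, 12.0], [1.0, 2.0], [0.5, 0.5])
total_variance = within + between
```

In the hierarchical version, the same decomposition is applied recursively down the tree, so the between-model term can be attributed to each uncertain model component in turn.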

  17. Models and role models

    NARCIS (Netherlands)

    ten Cate, J.M.

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of

  18. Multiscale musculoskeletal modelling, data–model fusion and electromyography-informed modelling

    Science.gov (United States)

    Zhang, J.; Heidlauf, T.; Sartori, M.; Besier, T.; Röhrle, O.; Lloyd, D.

    2016-01-01

    This paper proposes methods and technologies that advance the state of the art for modelling the musculoskeletal system across the spatial and temporal scales; and storing these using efficient ontologies and tools. We present population-based modelling as an efficient method to rapidly generate individual morphology from only a few measurements and to learn from the ever-increasing supply of imaging data available. We present multiscale methods for continuum muscle and bone models; and efficient mechanostatistical methods, both continuum and particle-based, to bridge the scales. Finally, we examine both the importance that muscles play in bone remodelling stimuli and the latest muscle force prediction methods that use electromyography-assisted modelling techniques to compute musculoskeletal forces that best reflect the underlying neuromuscular activity. Our proposal is that, in order to have a clinically relevant virtual physiological human, (i) bone and muscle mechanics must be considered together; (ii) models should be trained on population data to permit rapid generation and use underlying principal modes that describe both muscle patterns and morphology; and (iii) these tools need to be available in an open-source repository so that the scientific community may use, personalize and contribute to the database of models. PMID:27051510

  19. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratory Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  20. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
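The division of labour described above (proxy model fills the Jacobian, original model tests the upgrade) can be sketched in a few lines. This is an illustrative Gauss-Newton step under invented names, not the PEST implementation:

```python
import numpy as np

def proxy_assisted_step(original_model, proxy_model, params, obs, delta=1e-4):
    """One Gauss-Newton parameter upgrade: the cheap proxy model fills
    the Jacobian by finite differences, while the expensive original
    model evaluates residuals and tests the candidate upgrade."""
    p = np.asarray(params, dtype=float)
    r = obs - original_model(p)                 # residuals from the original model
    jac = np.empty((len(obs), len(p)))
    for j in range(len(p)):                     # finite differences on the proxy
        dp = p.copy()
        dp[j] += delta
        jac[:, j] = (proxy_model(dp) - proxy_model(p)) / delta
    step, *_ = np.linalg.lstsq(jac, r, rcond=None)
    candidate = p + step
    # Accept the upgrade only if the original model's fit improves.
    if np.sum((obs - original_model(candidate)) ** 2) < np.sum(r ** 2):
        return candidate
    return p

# Linear toy problem; the proxy is the model itself, purely for illustration,
# so a single step recovers the parameters exactly.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
def model(p):
    return A @ np.asarray(p, dtype=float)
obs = model([3.0, 4.0])
p_new = proxy_assisted_step(model, model, [0.0, 0.0], obs)
```

Because only the upgrade tests touch the original model, the expensive runs are few and, as the abstract notes, readily parallelized.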

  1. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all of that complexity is required. Therefore simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on Hodgkin-Huxley's model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in a SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness, need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating a SNN.
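The LIF model mentioned above reduces to a single update per time step, dV/dt = (-(V - V_rest) + R*I)/tau, with a reset on threshold crossing. A minimal sketch (parameter names and values are illustrative, not taken from the tutorial):

```python
import numpy as np

def lif_spike_times(i_input, dt=1.0, tau=20.0, v_rest=0.0, v_reset=0.0,
                    v_thresh=1.0, r_m=1.0):
    """Leaky integrate-and-fire neuron: integrate the input current,
    emit a spike when the membrane potential reaches threshold, then
    reset the potential."""
    v = v_rest
    spikes = []
    for step, i_t in enumerate(i_input):
        dv = (-(v - v_rest) + r_m * i_t) / tau   # leak plus driven charge
        v += dv * dt
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant supra-threshold current yields a perfectly regular spike train,
# one symptom of the LIF model's limited biological realism.
spikes = lif_spike_times(np.full(200, 1.5))
```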

  2. Population balance models: a useful complementary modelling framework for future WWTP modelling

    DEFF Research Database (Denmark)

    Nopens, Ingmar; Torfs, Elena; Ducoste, Joel

    2015-01-01

    Population balance models (PBMs) represent a powerful modelling framework for the description of the dynamics of properties that are characterised by distributions. This distribution of properties under transient conditions has been demonstrated in many chemical engineering applications. Modelling...

  3. From Product Models to Product State Models

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM and in particular how to model a PSM is the Research...

  4. North American Carbon Project (NACP) Regional Model-Model and Model-Data Intercomparison Project

    Science.gov (United States)

    Huntzinger, D. N.; Post, W. M.; Jacobson, A. R.; Cook, R. B.

    2009-05-01

    Available observations are localized and widely separated in both space and time, so we depend heavily on models to characterize, understand, and predict carbon fluxes at regional and global scales. The results from each model differ because they use different approaches (forward vs. inverse), modeling strategies (detailed process, statistical, observation based), process representation, boundary conditions, initial conditions, and driver data. To investigate these differences we conducted a model-model and model-data comparison using available forward ecosystem model and atmospheric inverse output, along with regional scale inventory data. Forward or "bottom-up" models typically estimate carbon fluxes through a set of physiological relationships, and are based on our current mechanistic understanding of how carbon is exchanged within ecosystems. Inverse or "top-down" analyses use measured atmospheric concentrations of CO2, coupled with an atmospheric transport model to infer surface flux distributions. Although bottom-up models do fairly well at reproducing measured fluxes (i.e., net ecosystem exchange) at a given location, they vary considerably in their estimates of carbon flux over regional or continental scales, suggesting difficulty in scaling mechanistic relationships to large areas and/or timescales. Conversely, top-down inverse models predict fluxes that are quantitatively consistent with atmospheric measurements, suggesting that they are capturing large scale variability in flux quite well, but offer limited insights into the processes controlling this variability and how fluxes vary at fine spatial scales. The analyses focused on identifying and quantifying spatial and temporal patterns of carbon fluxes among the models; quantifying across-model variability, as well as comparing simulated or estimated surface fluxes and biomass to observed values at regional to continental scales for the period 2000-2005. The analysis focused on the following three

  5. Population Balance Models: A useful complementary modelling framework for future WWTP modelling

    DEFF Research Database (Denmark)

    Nopens, Ingmar; Torfs, Elena; Ducoste, Joel

    2014-01-01

    Population Balance Models (PBMs) represent a powerful modelling framework for the description of the dynamics of properties that are characterised by statistical distributions. This has been demonstrated in many chemical engineering applications. Modelling efforts of several current and future unit...

  6. Model Metric untuk Mengukur Fleksibilitas Model Proses Bisnis

    Directory of Open Access Journals (Sweden)

    Endang Wahyu Pamungkas

    2014-10-01

    Full Text Available Business organizations around the world currently make extensive use of digital information systems to gain insight into the business process management they practise. The use of Enterprise Resource Planning (ERP) systems is one example of technology in business process management. Through such a system, a company can build and develop its business processes. It can also adapt its processes quickly to changes that occur as needs and information grow, market conditions shift, or policies change. Because business processes change so often, the flexibility of the process models that are built must be improved. Supporting this improvement in flexibility requires a model for measuring the flexibility of a business process model. Analysts can then use such a model to make comparisons and so identify the business process model that is most flexible and best suited to the company, drawing on the flexibility aspects examined in previous studies. This paper investigates the flexibility aspects of business process models in order to produce a metric model that can quantify the flexibility of a business process model. The metric model produced in this research can compute the flexibility of business process models quantitatively. Keywords: ERP, flexibility, metadata, metric model, business process model, variation

  7. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has been carried out on models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of these unconventional approaches with results obtained by discriminant analysis, logistic regression, and neural network application, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of the elimination of selected variables on the overall prediction ability of these models.
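As a concrete instance of the "old and well known" class of models referred to above, Altman's (1968) Z-score is a linear discriminant over five financial ratios. The coefficients and cut-offs below are the conventional ones for the original public-manufacturer model; whether this exact variant is among those re-estimated on the Slovak dataset is not stated in the abstract:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score: working capital, retained earnings and
    EBIT each scaled by total assets, market value of equity over total
    liabilities, and asset turnover."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def zone(z):
    """Conventional zones: distress below 1.81, safe above 2.99."""
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"

z = altman_z(0.2, 0.2, 0.1, 1.0, 1.2)   # a middling firm
```

Eliminating a variable, as the authors propose, amounts to re-estimating the discriminant without that ratio and comparing classification accuracy before and after.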

  8. The ModelCC Model-Driven Parser Generator

    Directory of Open Access Journals (Sweden)

    Fernando Berzal

    2015-01-01

    Full Text Available Syntax-directed translation tools require the specification of a language by means of a formal grammar. This grammar must conform to the specific requirements of the parser generator to be used. This grammar is then annotated with semantic actions for the resulting system to perform its desired function. In this paper, we introduce ModelCC, a model-based parser generator that decouples language specification from language processing, avoiding some of the problems caused by grammar-driven parser generators. ModelCC receives a conceptual model as input, along with constraints that annotate it. It is then able to create a parser for the desired textual syntax and the generated parser fully automates the instantiation of the language conceptual model. ModelCC also includes a reference resolution mechanism so that ModelCC is able to instantiate abstract syntax graphs, rather than mere abstract syntax trees.

  9. Environmental Satellite Models for a Macroeconomic Model

    International Nuclear Information System (INIS)

    Moeller, F.; Grinderslev, D.; Werner, M.

    2003-01-01

    To support national environmental policy, it is desirable to forecast and analyse environmental indicators consistently with economic variables. However, environmental indicators are physical measures linked to physical activities that are not specified in economic models. One way to deal with this is to develop environmental satellite models linked to economic models. The system of models presented gives a frame of reference where emissions of greenhouse gases, acid gases, and leaching of nutrients to the aquatic environment are analysed in line with - and consistently with - macroeconomic variables. This paper gives an overview of the data and the satellite models. Finally, the results of applying the model system to calculate the impacts on emissions and the economy are reviewed in a few illustrative examples. The models have been developed for Denmark; however, most of the environmental data used are from the CORINAIR system implemented in numerous countries

  10. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. 
(3) Model structural inadequacies, whereby model structure may inadequately represent

  11. Coupling Climate Models and Forward-Looking Economic Models

    Science.gov (United States)

    Judd, K.; Brock, W. A.

    2010-12-01

    Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin. Current climate models range from General Circulation Models (GCM’s) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCM’s) help us understand the dynamics of GCM’s. The same is true in economics with Computable General Equilibrium Models (CGE’s) where some models are infinite-dimensional multidimensional differential equations but some are simple models. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCM’s do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North et al. (1981), Wu and North (2007)). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005) but in only a “one dimensional” EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two dimensional EBCM model on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into the intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models. However, this will be more complex than existing models with forward

  12. EIA model documentation: Petroleum Market Model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-30

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models (Public Law 94-385, section 57.b.2). The PMM models petroleum refining activities, the marketing of products, and the production of natural gas liquids and domestic methanol, and projects petroleum product prices and sources of supply for meeting demand. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption.

  13. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
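The "simple accounting models" mentioned above can be illustrated with a one-bucket root-zone water budget in which recharge is the water that spills past storage capacity each time step (all names and numbers here are illustrative):

```python
def bucket_recharge(precip, et, capacity=100.0, storage=50.0):
    """Minimal water-budget accounting model (units, e.g. mm per step):
    soil storage fills with precipitation minus evapotranspiration, and
    any excess over capacity drains to the water table as recharge."""
    recharge = []
    for p, e in zip(precip, et):
        storage = max(storage + p - e, 0.0)     # water balance, no negative storage
        spill = max(storage - capacity, 0.0)    # excess becomes recharge
        recharge.append(spill)
        storage -= spill
    return recharge

# Two wet steps against a half-full 100 mm bucket:
flux = bucket_recharge([60.0, 60.0], [5.0, 5.0])
```

A Richards-equation model replaces this single bucket with a discretised unsaturated-zone profile, at the cost of much more data and computation.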

  14. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
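    The selection-then-estimation procedure studied in the paper can be sketched as follows; the AIC-based choice between an intercept-only and a full linear model is a simplified stand-in for the criteria the paper compares, and all data and parameter values are illustrative:

```python
import math, random

random.seed(1)

def fit_ols(xs, ys):
    """Intercept and slope by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def aic(rss, n, k):
    """Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k."""
    return n * math.log(max(rss, 1e-12) / n) + 2 * k

def pmse_slope(xs, ys):
    """Post-model-selection estimate of the slope: choose between the
    intercept-only and the full model by AIC on the SAME data, then report
    the winning model's slope (a 0-1 random-weight model average)."""
    n = len(xs)
    a, b = fit_ols(xs, ys)
    rss_full = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
    my = sum(ys) / n
    rss_null = sum((y - my) ** 2 for y in ys)
    return b if aic(rss_full, n, 2) < aic(rss_null, n, 1) else 0.0

xs = [i / 10 for i in range(30)]
ys = [0.3 * x + random.gauss(0, 1) for x in xs]  # weak true slope, noisy data
slope_estimate = pmse_slope(xs, ys)
```

Because the slope is either kept or set exactly to zero depending on a data-driven choice, the estimator's sampling distribution is a mixture, which is why its risk is hard to characterise.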

  15. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
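    For reference, the standard DINA response function (the "noisy-AND" gate) can be written in a few lines; the guess and slip values in the examples are illustrative, not taken from the paper:

```python
def dina_prob(skills, q_row, guess, slip):
    """Probability of a correct response under the (conjunctive) DINA model.

    skills: 0/1 mastery vector for one examinee.
    q_row:  0/1 required-skill vector (one row of the Q-matrix).
    eta = 1 only if ALL required skills are mastered (the noisy-AND gate);
    then P(correct) = 1 - slip, otherwise P(correct) = guess.
    """
    eta = all(s >= q for s, q in zip(skills, q_row))
    return 1 - slip if eta else guess
```

With guess = 0.2 and slip = 0.1, an examinee mastering both required skills answers correctly with probability 0.9, while missing any one required skill drops the probability to 0.2 regardless of the other skills, which is the conjunctive behaviour the equivalency result re-expresses in compensatory form.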

  16. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A 8.0 2.0 94.52% 88.46% 76 108 12 12 0.86 0.91 0.78 0.94. Model B 2.0 2.0 93.18% 89.33% 64 95 10 9 0.88 0.90 0.75 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding the 32-entry negative dataset of MiRTif to our testing set (MiRecords) ...

  17. Underground economy modelling: simple models with complicated dynamics

    OpenAIRE

    Albu, Lucian-Liviu

    2003-01-01

    The paper aims to model the underground economy using two different models: one based on the labor supply method and a generalized model for the allocation of time. The model based on the labor supply method is conceived as a simulating one, in order to determine some reasonable thresholds of the underground sector's extension based only on the available macroeconomic statistical data. The generalized model for the allocation of time is a model based on a direct approach which estimates the underground ...

  18. Integrative structure modeling with the Integrative Modeling Platform.

    Science.gov (United States)

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.
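    The core idea of casting structure building as an optimization over a data-derived scoring function can be illustrated with a deliberately tiny example; this is not the IMP API, and the restraint forms and numbers are invented for illustration:

```python
import random

random.seed(0)

# Toy version of integrative modeling as optimization: a "model" is a single
# bead coordinate, and two pieces of "data" (a distance restraint and an
# allowed region) are encoded as terms of a scoring function.
def score(x, anchor=3.0, upper=10.0):
    restraint = (x - anchor) ** 2                     # harmonic distance restraint
    bound = 0.0 if x <= upper else (x - upper) ** 2   # penalty outside allowed region
    return restraint + bound

# Sample candidate models and keep the one that best satisfies the data.
candidates = [random.uniform(0.0, 12.0) for _ in range(1000)]
best_model = min(candidates, key=score)
```

Real integrative modeling scores thousands of coordinates against crosslinking, EM, and other data, and uses Monte Carlo or gradient-based sampling rather than brute-force enumeration, but the structure of the problem (encode information as scoring terms, then optimize candidate models) is the same.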

  19. Modeling volatility using state space models.

    Science.gov (United States)

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
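    A minimal sketch of the setup described here, with log-volatility as an AR(1) hidden state driven by dynamic noise and observed through additive observational noise; all parameter values are illustrative, not the paper's estimates:

```python
import math, random

random.seed(42)

# Hidden state: AR(1) log-volatility driven by dynamic noise.
# Observation: the state plus observational noise that does NOT feed back
# into future values of the system.
phi = 0.98          # AR coefficient of the hidden state (illustrative)
n = 5000
state = 0.0
hidden, observed = [], []
for _ in range(n):
    state = phi * state + random.gauss(0, 0.1)       # dynamic noise
    hidden.append(state)
    observed.append(state + random.gauss(0, 0.5))    # observational noise

# The relaxation time of shocks follows from the AR coefficient:
relaxation_time = -1.0 / math.log(phi)  # about 50 time steps here
```

Fitting an ordinary AR model directly to `observed` lumps both noise sources together and biases the estimated persistence downward, which is the mechanism behind the paper's finding that AR models underestimate relaxation times.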

  20. Document Models

    Directory of Open Access Journals (Sweden)

    A.A. Malykh

    2017-08-01

    In this paper, the concept of locally simple models is considered. Locally simple models are arbitrarily complex models built from relatively simple components. A lot of practically important domains of discourse can be described as locally simple models, for example, business models of enterprises and companies. Up to now, research in human reasoning automation has mainly concentrated on the most intellectually intensive activities, such as automated theorem proving. On the other hand, the retailer business model is formed from "jobs", and each "job" can be modelled and automated more or less easily. At the same time, the whole retailer model as an integrated system is extremely complex. In this paper, we offer a variant of the mathematical definition of a locally simple model. This definition is intended for modelling a wide range of domains. Therefore, we also must take into account perceptual and psychological issues. Logic is elitist, and if we want to attract as many people as possible to our models, we need to hide this elitism behind some metaphor to which 'ordinary' people are accustomed. As such a metaphor, we use the concept of a document, so our locally simple models are called document models. Document models are built in the paradigm of semantic programming. This allows us to achieve another important goal: to make the document models executable. Executable models are models that can act as practical information systems in the described domain of discourse. Thus, if our model is executable, then programming becomes redundant. The direct use of a model, instead of its programming coding, brings important advantages, for example, a drastic cost reduction for development and maintenance. Moreover, since the model is sound and not dissolved within programming modules, we can directly apply AI tools, in particular, machine learning. This significantly expands the possibilities for automation and

  1. An online model composition tool for system biology models.

    Science.gov (United States)

    Coskun, Sarp A; Cicek, A Ercument; Lai, Nicola; Dash, Ranjan K; Ozsoyoglu, Z Meral; Ozsoyoglu, Gultekin

    2013-09-05

    There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. The SBML Test Suite models will be a nice starting point for beginners, and, for more advanced purposes, users will be able to access and employ models from the BioModels Database as well.
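    The SBML documents these tools operate on are plain XML, so their basic structure can be inspected with standard tooling; the fragment below is a hypothetical minimal model, not taken from the BioModels Database:

```python
import xml.etree.ElementTree as ET

# A hand-written SBML Level 3 fragment (hypothetical toy model), just to show
# the kind of document such tools upload, query, and compose.
SBML = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
  <model id="toy_pathway">
    <listOfSpecies>
      <species id="glucose"/>
      <species id="pyruvate"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="glycolysis"/>
    </listOfReactions>
  </model>
</sbml>"""

NS = {"sbml": "http://www.sbml.org/sbml/level3/version1/core"}
root = ET.fromstring(SBML)
species_ids = [s.get("id") for s in root.findall(".//sbml:species", NS)]
reaction_ids = [r.get("id") for r in root.findall(".//sbml:reaction", NS)]
```

Merging two models is then, at its simplest, a matter of reconciling overlapping species and reaction identifiers across documents, which is the bookkeeping the composition tool automates.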

  2. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics that have a direct bearing on the model input process are reviewed in this paper, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present.
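    One common way to model dependent inputs is to induce correlation on an underlying Gaussian scale before transforming to the target marginals; the sketch below draws correlated lognormal input pairs (the distribution parameters are illustrative, not from the paper):

```python
import math, random

random.seed(7)

def correlated_lognormals(n, rho, mu1=0.0, s1=0.5, mu2=0.0, s2=0.5):
    """Draw pairs of lognormal inputs whose underlying normal variables have
    correlation rho (a Gaussian-copula-style construction)."""
    pairs = []
    for _ in range(n):
        z1 = random.gauss(0, 1)
        # Second normal built to have correlation rho with the first.
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
        pairs.append((math.exp(mu1 + s1 * z1), math.exp(mu2 + s2 * z2)))
    return pairs

sample = correlated_lognormals(20000, rho=0.8)
```

Feeding such jointly sampled inputs to a risk model, rather than sampling each input independently, is exactly the kind of joint input modeling the abstract argues deserves more scrutiny.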

  3. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of trusted measurement supporting behavior measurement based on the trusted connection architecture (TCA), with three entities and three levels, is proposed, and a frame to illustrate the model is given. The model synthesizes three trusted measurement dimensions, including trusted identity, trusted status and trusted behavior; satisfies the essential requirements of trusted measurement; and unifies the TCA with three entities and three levels.

  4. A unification of RDE model and XCDM model

    International Nuclear Information System (INIS)

    Liao, Kai; Zhu, Zong-Hong

    2013-01-01

    In this Letter, we propose a new generalized Ricci dark energy (NGR) model to unify Ricci dark energy (RDE) and XCDM. Our model can distinguish between RDE and XCDM by introducing a parameter β called the weight factor. When β=1, the NGR model becomes the usual RDE model. The XCDM model corresponds to β=0. Moreover, the NGR model permits situations where neither β=1 nor β=0. We then perform a statefinder analysis on the NGR model to see how β affects the trajectory on the r-s plane. In order to determine the value of β, we constrain the NGR model with the latest observations, including type Ia supernovae (SNe Ia) from the Union2 set (557 data points), the baryonic acoustic oscillation (BAO) observation from the spectroscopic Sloan Digital Sky Survey (SDSS) data release 7 (DR7) galaxy sample, and the cosmic microwave background (CMB) observation from the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) results. With the Markov Chain Monte Carlo (MCMC) method, the constraint result is β = 0.08 +0.30/−0.21 (1σ), +0.43/−0.28 (2σ), which shows that the observations prefer an XCDM universe rather than the RDE model. It seems the RDE model is ruled out in the NGR scenario within the 2σ region. Furthermore, we compare it with some successful cosmological models using the AIC information criterion. The NGR model seems to be a good choice for describing the universe.

  5. Downscaling GISS ModelE Boreal Summer Climate over Africa

    Science.gov (United States)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE computed sea-surface temperatures (SST) in the eastern South Atlantic Ocean are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, eliminates the ModelE double ITCZ over the Atlantic, and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  6. Essays on model uncertainty in financial models

    NARCIS (Netherlands)

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  7. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue by illustrating specific (actuarial) applications of this type of model. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  8. ModelMate - A graphical user interface for model analysis

    Science.gov (United States)

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  9. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever increasing computing power available at affordable prices. This paper describes this process from several angles: first, the operational requirements that are most critical from the point of view of model performance, both for normal and off-normal operating conditions; second, core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT and S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally, the model validation procedures, which are of course an integral part of model development and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy, as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  10. Mineralogic Model (MM3.0) Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    C. Lum

    2002-02-12

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1

  11. Mineralogic Model (MM3.0) Analysis Model Report

    International Nuclear Information System (INIS)

    Lum, C.

    2002-01-01

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M and O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M and O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components

  12. ERM model analysis for adaptation to hydrological model errors

    Science.gov (United States)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. In terms of the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to reduce the existing errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study focuses on the ability of rainfall-runoff model parameters to cope with three types of error, timing, shape and volume, the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate whether its parameters can be updated to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
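    Parameter updating against one of the named error types, a volume error, can be sketched in a few lines; the linear runoff model and the numbers below are hypothetical, not the ERM formulation:

```python
# Toy illustration of parameter updating to remove a volume error: rescale a
# runoff coefficient so simulated flood volume matches the observed volume,
# without recalibrating the whole model.
def simulate_runoff(rain, runoff_coeff):
    """Hypothetical lumped model: runoff is a fixed fraction of rainfall."""
    return [runoff_coeff * r for r in rain]

def update_runoff_coeff(rain, observed, coeff):
    """Rescale the coefficient by the observed/simulated volume ratio."""
    sim = simulate_runoff(rain, coeff)
    return coeff * sum(observed) / sum(sim)

rain = [0.0, 5.0, 12.0, 8.0, 2.0]
observed = [0.0, 2.0, 4.8, 3.2, 0.8]            # consistent with coeff = 0.4
updated = update_runoff_coeff(rain, observed, coeff=0.3)
```

Timing and shape errors need different update rules (e.g. shifting or reshaping the unit response), which is why the study examines each error type separately.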

  13. Model documentation report: Short-Term Hydroelectric Generation Model

    International Nuclear Information System (INIS)

    1993-08-01

    The purpose of this report is to define the objectives of the Short-Term Hydroelectric Generation Model (STHGM), describe its basic approach, and provide details on the model structure. This report is intended as a reference document for model analysts, users, and the general public. Documentation of the model is in accordance with the Energy Information Administration's (EIA) legal obligation to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). The STHGM performs a short-term (18- to 27-month) forecast of hydroelectric generation in the United States using an autoregressive integrated moving average (ARIMA) time series model with precipitation as an explanatory variable. The model results are used as input for the Short-Term Energy Outlook.

  14. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  15. Model uncertainty: Probabilities for models?

    International Nuclear Information System (INIS)

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising.

  16. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
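    Latin hypercube sampling, offered here as a candidate for sensitivity analyses, is straightforward to implement: each input dimension is divided into n equally probable strata and each stratum is sampled exactly once. A minimal sketch on the unit hypercube:

```python
import random

random.seed(0)

def latin_hypercube(n, dims):
    """n samples in [0,1)^dims: each dimension is split into n equal strata,
    each stratum is hit exactly once, and stratum order is shuffled
    independently per dimension."""
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        random.shuffle(strata)
        for i in range(n):
            # Place the i-th sample uniformly within its assigned stratum.
            samples[i][d] = (strata[i] + random.random()) / n
    return samples

lhs = latin_hypercube(10, 2)
```

Mapping these uniform coordinates through each input's inverse CDF (e.g. a lognormal quantile function, for the multiplicative chain models discussed above) yields stratified samples of the actual model inputs with far fewer runs than simple random sampling.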

  17. Galactic models

    International Nuclear Information System (INIS)

    Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.

    1990-01-01

    Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings

  18. A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle

    Science.gov (United States)

    Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine

    2016-01-01

    A range of proxy observations have recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P - E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to the preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. 
Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a

  19. The Bond Fluctuation Model and Other Lattice Models

    Science.gov (United States)

    Müller, Marcus

    Lattice models constitute a class of coarse-grained representations of polymeric materials. They have enjoyed a longstanding tradition for investigating the universal behavior of long chain molecules by computer simulations and enumeration techniques. A coarse-grained representation is often necessary to investigate properties on large time- and length scales. First, some justification for using lattice models will be given and the benefits and limitations will be discussed. Then, the bond fluctuation model by Carmesin and Kremer [1] is placed into the context of other lattice models and compared to continuum models. Some specific techniques for measuring the pressure in lattice models will be described. The bond fluctuation model has been employed in more than 100 simulation studies in the last decade and only a few selected applications can be mentioned.

  20. A Distributed Snow Evolution Modeling System (SnowModel)

    Science.gov (United States)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  1. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamical in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author)

  2. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  3. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    Science.gov (United States)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU is increased as models become more complex because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically, because it is not simple to interpret exactly why a model is producing the results it does and to identify which model assumptions are key: such models combine models of many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework—the multi-assumption architecture and testbed (MAAT)—that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser; PecAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration to enhance our predictive understanding of biological systems.
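The ensemble-generation idea behind a multi-assumption framework can be sketched as a cross-product over alternative process representations. This is a hypothetical toy, not the MAAT code; the two sub-processes, their functional forms, and all parameter values below are invented for illustration:

```python
from itertools import product

# Hypothetical alternative representations (hypotheses) of two sub-processes
# in a toy ecosystem model. Each dict maps a hypothesis name to a function.
photosynthesis = {
    "rectangular_hyperbola": lambda light: 20.0 * light / (light + 200.0),
    "linear_saturating":     lambda light: min(0.05 * light, 15.0),
}
respiration = {
    "q10":    lambda temp: 2.0 * 2.0 ** ((temp - 25.0) / 10.0),
    "linear": lambda temp: 0.1 * temp,
}

def run_model(photo_fn, resp_fn, light, temp):
    """Net exchange = assimilation minus respiration (toy formulation)."""
    return photo_fn(light) - resp_fn(temp)

# Generate and execute the full cross-product ensemble of hypotheses.
ensemble = {}
for (p_name, p_fn), (r_name, r_fn) in product(photosynthesis.items(),
                                              respiration.items()):
    ensemble[(p_name, r_name)] = run_model(p_fn, r_fn, light=500.0, temp=25.0)

print(len(ensemble))  # 4 structural variants from 2 x 2 hypotheses
```

The spread of outputs across the ensemble is then a direct, if crude, measure of model hypothesis uncertainty for this forcing, separate from any parametric uncertainty within each variant.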

  4. Process models and model-data fusion in dendroecology

    Directory of Open Access Journals (Sweden)

    Joel eGuiot

    2014-08-01

    Dendrochronology (i.e. the study of annually dated tree-ring time series) has proved to be a powerful technique to understand tree-growth. This paper aims to show the value of using ecophysiological modeling not only to understand and predict tree-growth (dendroecology) but also to reconstruct past climates (dendroclimatology). Process models have been used for several decades in dendroclimatology, but it is only recently that methods of model-data fusion have led to significant progress in modeling tree-growth as a function of climate and in reconstructing past climates. These model-data fusion (MDF) methods, mainly based on the Bayesian paradigm, have been shown to be powerful for both model calibration and model inversion. After a rapid survey of tree-growth modeling, we illustrate MDF with examples based on tree-ring series of Southern France Aleppo pines and Central France oaks. These examples show that if plants experienced CO2 fertilization, this would have a significant effect on tree-growth, which in turn would bias the climate reconstructions. This bias could be extended to other environmental non-climatic factors directly or indirectly affecting annual ring formation and not taken into account in classical empirical models, which supports the use of more complex process-based models. Finally, we conclude by showing the value of the data assimilation methods applied in climatology to produce climate re-analyses.
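The Bayesian calibration step of model-data fusion can be illustrated at toy scale: an invented linear growth model (ring width proportional to a climate index) is calibrated against invented observations with a grid-approximated posterior. This is not the authors' model, only the general mechanism:

```python
import math

# Toy growth model: ring_width = a * climate_index, Gaussian errors.
# Model-data fusion here reduces to Bayesian calibration of 'a' on a grid.
climate = [0.8, 1.2, 1.0, 0.6, 1.4]          # hypothetical climate indices
rings   = [1.7, 2.3, 2.1, 1.3, 2.8]          # hypothetical ring widths (mm)
sigma = 0.2                                   # assumed observation error

def log_likelihood(a):
    return sum(-0.5 * ((w - a * c) / sigma) ** 2
               for c, w in zip(climate, rings))

# Flat prior on a grid; posterior obtained by normalising the likelihood.
grid = [0.5 + 0.01 * i for i in range(301)]   # a in [0.5, 3.5]
weights = [math.exp(log_likelihood(a)) for a in grid]
total = sum(weights)
posterior = [w / total for w in weights]
a_mean = sum(a * p for a, p in zip(grid, posterior))
print(round(a_mean, 2))
```

The same posterior can be run "in inverse": once a is calibrated, observed ring widths constrain the unknown climate index, which is the dendroclimatological use of the model.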

  5. `Models of' versus `Models for'. Toward an Agent-Based Conception of Modeling in the Science Classroom

    Science.gov (United States)

    Gouvea, Julia; Passmore, Cynthia

    2017-03-01

    The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic— models of versus models for—that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.

  6. Eclipse models

    International Nuclear Information System (INIS)

    Michel, F.C.

    1989-01-01

    Three existing eclipse models for the PSR 1957 + 20 pulsar are discussed in terms of their requirements and the information they yield about the pulsar wind: the interacting wind from a companion model, the magnetosphere model, and the occulting disk model. It is shown that the wind model requires an MHD wind from the pulsar, with enough particles that the Poynting flux of the wind can be thermalized; in this model, a large flux of energetic radiation from the pulsar is required to accompany the wind and drive the wind off the companion. The magnetosphere model requires an EM wind, which is Poynting flux dominated; the advantage of this model over the wind model is that the plasma density inside the magnetosphere can be orders of magnitude larger than in a magnetospheric tail blown back by wind interaction. The occulting disk model also requires an EM wind so that the interaction would be pushed down onto the companion surface, minimizing direct interaction of the wind with the orbiting macroscopic particles.

  7. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM) developed under the umbrella of PNUCC's System Analysis Committee is capable of simulating the operation of a given load/resource scenario. This model employs a Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and modifications made to the capacity model in order to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS are presented.
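The kind of Monte-Carlo availability modeling described can be sketched with a two-state (available / forced-outage) hourly simulation. The forced outage rate and repair time below are illustrative stand-ins, not NERC-GADS data, and the transition model is the simplest one that reproduces a target long-run availability:

```python
import random

# Toy hourly Monte-Carlo availability model for a single thermal unit.
rng = random.Random(3)
forced_outage_rate = 0.08            # assumed unit FOR (illustrative)
mttr_hours = 24.0                    # assumed mean time to repair
p_repair = 1.0 / mttr_hours
# Choose the hourly failure probability so the steady-state unavailability
# equals the FOR:  FOR = p_fail / (p_fail + p_repair)
p_fail = forced_outage_rate * p_repair / (1.0 - forced_outage_rate)

hours = 200_000
up, available_hours = True, 0
for _ in range(hours):
    if up:
        available_hours += 1
        if rng.random() < p_fail:
            up = False                # forced outage begins
    elif rng.random() < p_repair:
        up = True                     # unit returns to service

simulated_for = 1.0 - available_hours / hours
print(round(simulated_for, 2))  # should land near the assumed FOR of 0.08
```

An energy (seasonal) model would aggregate these hourly states into derated seasonal capability; a capacity model would sample the hourly state directly at peak hours.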

  8. Using the Model Coupling Toolkit to couple earth system models

    Science.gov (United States)

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
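The exchange pattern described, two models stepping independently and swapping fields at predetermined synchronization intervals, can be sketched in a toy form. This is purely illustrative and does not use the MCT, ROMS, SWAN, or COAMPS APIs; the relaxation dynamics and all numbers are invented:

```python
# Two toy models exchange a scalar field at fixed synchronisation intervals.

class ToyModel:
    """A model integrating its own state with its own time step."""
    def __init__(self, name, state, dt):
        self.name, self.state, self.dt = name, state, dt
        self.forcing = state  # overwritten at each synchronisation point

    def advance(self, interval):
        for _ in range(int(interval / self.dt)):
            # Relax the state toward the forcing supplied by the partner.
            self.state += self.dt * 0.1 * (self.forcing - self.state)

ocean = ToyModel("ocean", state=10.0, dt=1.0)   # e.g. an SST-like field
atmos = ToyModel("atmos", state=20.0, dt=0.5)   # e.g. an air-temperature-like field

sync_interval = 10.0
for _ in range(50):                   # 50 coupling cycles
    ocean.advance(sync_interval)      # model "1" on its own time step
    atmos.advance(sync_interval)      # model "2" on its own time step
    # Synchronisation point: each model receives the other's field as forcing.
    ocean.forcing, atmos.forcing = atmos.state, ocean.state

print(abs(ocean.state - atmos.state) < 0.1)  # the coupled fields converge
```

In a real MCT-based system the swap at the synchronisation point is a distributed-memory transfer with regridding between the two model grids rather than a scalar assignment, but the control flow is the same.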

  9. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
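The copula construction described can be illustrated with assumed ingredients: a stationary Gaussian AR(1) series for the internal dynamics, and an exponential marginal standing in for the paper's nonparametric Bayesian marginal. The cdf-inverse cdf transform carries the Gaussian dynamics onto the non-Gaussian marginal:

```python
import math
import random
from statistics import NormalDist

rng = random.Random(7)
phi = 0.8                              # AR(1) coefficient (internal dynamics)
innov_sd = math.sqrt(1.0 - phi ** 2)   # keeps z_t marginally N(0, 1)
std_normal = NormalDist()

def exp_inv_cdf(u, rate=1.0):
    """Inverse cdf of the target exponential marginal."""
    return -math.log(1.0 - u) / rate

z = 0.0
series = []
for _ in range(5000):
    z = phi * z + innov_sd * rng.gauss(0.0, 1.0)   # Gaussian dynamics
    u = std_normal.cdf(z)                          # Gaussian score -> uniform
    series.append(exp_inv_cdf(u))                  # uniform -> exponential

# The transform preserves the rank dependence of the dynamics while giving
# a non-Gaussian (here exponential, mean 1) marginal distribution.
print(round(sum(series) / len(series), 2))
```

Replacing `exp_inv_cdf` with any other inverse cdf changes the marginal without touching the dependence structure, which is exactly the separation of concerns the abstract describes.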

  10. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    Science.gov (United States)

    Plotnitsky, Arkady

    2017-06-01

    The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The

  11. Optimisation of BPMN Business Models via Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work, where we developed a model checking based method for the analysis of BPMN models. We define a structure for expressing optimisation goals … for synthesized BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We here present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates …

  12. PENGGUNAAN THE ZMIJEWSKI MODEL, THE ALTMAN MODEL, DAN THE SPRINGATE MODEL SEBAGAI PREDIKTOR DELISTING

    Directory of Open Access Journals (Sweden)

    Mila Fatmawati

    2017-03-01

    The purpose of this study was to provide empirical evidence on whether the Zmijewski model, the Altman model, and the Springate model could be used as predictors of delisting. The objects of this study were companies whose shares were removed from trading (delisted) on the Indonesia Stock Exchange in 2003-2009. As a benchmark for the delisted companies, companies that were still listed on the Stock Exchange in the same kind of business field were used. Comparison samples were taken randomly over the same period as the delisted companies. The method of analysis used was logistic regression. The results found that of the three delisting predictor models, only the Zmijewski model could be used to predict delisting in the period of observation, while the Altman model and the Springate model could not be used as predictive models of delisting. This is because the Zmijewski model emphasizes the amount of debt in predicting delisting: the larger the debt, the more accurate the prediction of delisting. Meanwhile, the Altman model and the Springate model put more emphasis on profitability measures: the smaller the profitability, the more precise the prediction of delisting. The delisted companies under observation, however, still tended to earn profits while carrying a relatively large amount of debt.
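The study's analysis method, logistic regression on a delisting indicator, can be sketched at toy scale. The data below are invented (debt ratio as a single Zmijewski-style predictor; 1 = delisted, 0 = still listed), and the fit uses plain gradient ascent rather than a statistics package:

```python
import math

# Invented sample: debt ratio per firm and whether it was delisted.
debt_ratio = [0.9, 0.8, 0.6, 0.45, 0.7, 0.3, 0.5, 0.2]
delisted   = [1,   1,   1,   1,    0,   0,   0,   0]

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Fit intercept b0 and slope b1 by gradient ascent on the log-likelihood.
b0, b1 = 0.0, 0.0
for _ in range(5000):
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(debt_ratio, delisted))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x
             for x, y in zip(debt_ratio, delisted))
    b0 += 0.1 * g0
    b1 += 0.1 * g1

# Higher debt should raise the predicted delisting probability.
p_low, p_high = sigmoid(b0 + b1 * 0.2), sigmoid(b0 + b1 * 0.9)
print(b1 > 0, p_high > p_low)
```

A positive fitted slope on the debt ratio is the toy analogue of the study's finding that debt-weighted scores discriminate delisted from listed firms.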

  13. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase-interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints is considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations generally depend on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows an efficient solving strategy and a concise error diagnosis to be implemented. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  14. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  15. A Lagrangian mixing frequency model for transported PDF modeling

    Science.gov (United States)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.

  16. Modelling MIZ dynamics in a global model

    Science.gov (United States)

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with a greater fidelity. Current models are not fully fit for the purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  17. Graphical Rasch models

    DEFF Research Database (Denmark)

    Kreiner, Svend; Christensen, Karl Bang

    Rasch models; Partial Credit models; Rating Scale models; Item bias; Differential item functioning; Local independence; Graphical models

  18. Transforming Graphical System Models to Graphical Attack Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof

    2016-01-01

    Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations … approach to transforming graphical system models to graphical attack models in the form of attack trees. Based on an asset in the model, our transformations result in an attack tree that represents attacks by all possible actors in the model, after which the actor in question has obtained the asset …

  19. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics … of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical … constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has …

  20. Pavement Aging Model by Response Surface Modeling

    Directory of Open Access Journals (Sweden)

    Manzano-Ramírez A.

    2011-10-01

    In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL), and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were almost adequate, with an error of 20 % that was associated with the other environmental factors, which were not considered at the beginning of the research.
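The core of a response-surface model is a low-order polynomial fitted to designed experiments by least squares. A minimal one-factor sketch (invented aging-time/volatilized-material data, not the paper's measurements) fits y = c0 + c1·t + c2·t² via the normal equations:

```python
# Hypothetical aging data: oven time (h) vs volatilized material (%).
times = [0.0, 2.0, 4.0, 6.0, 8.0]
vm    = [0.1, 0.9, 2.2, 4.1, 6.6]

def solve3(a, b):
    """Gauss-Jordan elimination for a 3x3 system (toy sizes, no pivoting)."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [vj - m[j][i] * vi for vj, vi in zip(m[j], m[i])]
    return [m[k][3] for k in range(3)]

# Normal equations X'X c = X'y for the design matrix [1, t, t^2].
X = [[1.0, t, t * t] for t in times]
XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[r][i] * vm[r] for r in range(len(X))) for i in range(3)]
c0, c1, c2 = solve3(XtX, Xty)

def predict(t):
    return c0 + c1 * t + c2 * t * t

print(round(predict(4.0), 1))  # prints 2.2, matching the measured value
```

A full RSM study extends the design matrix with a second factor (temperature) and its interaction term, but the least-squares machinery is the same.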

  1. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
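The variable-selection mechanism of boosting can be illustrated with componentwise L2-boosting, using plain linear base learners instead of the paper's penalized splines (a simplification; data and parameters are invented). At each iteration, only the covariate that best fits the current residuals receives a shrunken update, so uninformative covariates keep near-zero coefficients:

```python
import random

rng = random.Random(1)
n, p = 200, 5
X = [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
# Only covariates 0 and 3 actually drive the response.
y = [2.0 * row[0] - 1.0 * row[3] + rng.gauss(0, 0.1) for row in X]

coef = [0.0] * p
nu = 0.1                      # shrinkage (step length)
for _ in range(300):
    resid = [yi - sum(c * xj for c, xj in zip(coef, row))
             for row, yi in zip(X, y)]
    # Componentwise step: fit each covariate alone to the residuals and
    # keep the one with the largest sum-of-squares reduction.
    best_j, best_b, best_gain = 0, 0.0, -1.0
    for j in range(p):
        xj = [row[j] for row in X]
        b = sum(x * r for x, r in zip(xj, resid)) / sum(x * x for x in xj)
        gain = b * b * sum(x * x for x in xj)
        if gain > best_gain:
            best_j, best_b, best_gain = j, b, gain
    coef[best_j] += nu * best_b

selected = [j for j, c in enumerate(coef) if abs(c) > 0.2]
print(sorted(selected))  # boosting concentrates on the informative covariates
```

The geoadditive setting replaces each linear base learner with a penalized spline, spatial surface, or varying-coefficient term of fixed degrees of freedom, which is what makes the competition between model terms fair.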

  2. Model coupler for coupling of atmospheric, oceanic, and terrestrial models

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Kobayashi, Takuya; Tsuduki, Katsunori; Kim, Keyong-Ok

    2007-02-01

    A numerical simulation system SPEEDI-MP, which is applicable to various environmental studies, consists of dynamical models and material transport models for the atmospheric, terrestrial, and oceanic environments, meteorological and geographical databases for model inputs, and system utilities for file management, visualization, analysis, etc., using graphical user interfaces (GUIs). As a numerical simulation tool, a model coupling program (model coupler) has been developed. It controls parallel calculations of several models and data exchanges among them to realize the dynamical coupling of the models. It is applicable to any model with a three-dimensional structured grid system, which is used by most environmental and hydrodynamic models. A coupled model system for water circulation has been constructed with atmosphere, ocean, wave, hydrology, and land-surface models using the model coupler. Performance tests of the coupled model system for water circulation were also carried out for the flood event in Saudi Arabia in January 2005 and the storm surge caused by Hurricane Katrina in August 2005. (author)

  3. ICRF modelling

    International Nuclear Information System (INIS)

    Phillips, C.K.

    1985-12-01

    This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. Three distinct types of modelling codes will be contrasted, and the validity and limitations of each discussed: discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs.

  4. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse...... are dynamic and we present a modeling approach that can be used to model such dynamics.We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can...

  5. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases
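
A toy illustration of degeneracy (not the authors' Groebner-basis construction): when the observations are confined to a constraint set, distinct polynomial models can agree exactly on the data while disagreeing everywhere else, so data-driven "best model" selection cannot distinguish them. Here the constraint is the unit circle, an assumed stand-in for an invariant set.

```python
import math

# Observations constrained to the unit circle x^2 + y^2 = 1
data = [(math.cos(0.1 * k), math.sin(0.1 * k)) for k in range(100)]

def model_a(x, y):
    return x * x            # z = x^2

def model_b(x, y):
    return 1.0 - y * y      # z = 1 - y^2, identical to model_a on the circle

targets = [model_a(x, y) for x, y in data]
err_a = max(abs(model_a(x, y) - z) for (x, y), z in zip(data, targets))
err_b = max(abs(model_b(x, y) - z) for (x, y), z in zip(data, targets))
# Off the constraint set the two models disagree substantially
gap = abs(model_a(0.5, 0.5) - model_b(0.5, 0.5))
```

Both models fit the observed data to machine precision, yet they predict different values at points the trajectory never visits, which is the essence of the degeneracy problem.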

  6. On the shell model connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.; Levai, G.; Kato, K.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand, the relation of the shell model and the cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known, but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energy of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model, in which not only is the cluster model space obtained from the full shell model space by an SU(3) symmetry-dictated truncation, but SU(3) dynamically symmetric interactions are also applied. Actually, Hamiltonians of this kind proved to be successful in describing the gross features of cluster states in a wide energy range. The novel feature of the present work is that we apply exclusively shell model interactions. The energies obtained from such a Hamiltonian for several bands of the (¹²C, ¹⁴C, ¹⁶O, ²⁰Ne, ⁴⁰Ca) + α systems turn out to be in good agreement with the experimental values.

  7. Approximating chiral quark models with linear σ-models

    International Nuclear Information System (INIS)

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

    We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  8. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  9. Modeling arson - An exercise in qualitative model building

    Science.gov (United States)

    Heineke, J. M.

    1975-01-01

    A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis: the fewer the restrictions, the wider the class of agents to which the model is applicable, and accordingly the more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.
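
A minimal, risk-neutral sketch of such a time-allocation choice; the quadratic apprehension risk and all parameter values are illustrative assumptions, whereas the paper works with general von Neumann-Morgenstern utility and derives comparative statics analytically.

```python
def expected_net_return(t, wage, fine, p_catch):
    """Risk-neutral expected return from allocating fraction t of time to the
    illicit activity: earnings wage*t, apprehension probability p_catch(t)."""
    return wage * t - p_catch(t) * fine

def optimal_allocation(wage, fine):
    # Assumed: apprehension risk grows quadratically with exposure, p(t) = t^2,
    # so the analytic optimum of wage*t - fine*t^2 is t* = wage / (2*fine).
    p = lambda t: t * t
    grid = [i / 1000.0 for i in range(1001)]
    return max(grid, key=lambda t: expected_net_return(t, wage, fine, p))

# Comparative statics: a larger fine reduces the optimal time allocation
t_low_fine = optimal_allocation(wage=10.0, fine=20.0)   # analytic optimum 0.25
t_high_fine = optimal_allocation(wage=10.0, fine=40.0)  # analytic optimum 0.125
```

The grid search reproduces the closed-form optimum, and doubling the fine halves the time allocated, which is the kind of policy-parameter responsiveness the model is built to examine.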

  10. Air Quality Dispersion Modeling - Alternative Models

    Science.gov (United States)

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  11. On the shell-model-connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago [1] as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand, the relation of the shell model and the cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known [2], but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energy of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model [3,4] in order to find an answer to this question, which seems to be affirmative. In particular, the energies obtained from such a Hamiltonian for several bands of the (¹²C, ¹⁴C, ¹⁶O, ²⁰Ne, ⁴⁰Ca) + α systems turn out to be in good agreement with the experimental values. The present results show that the simple and transparent SU(3) connection between the spherical shell model and the cluster model is valid not only for the harmonic oscillator interactions, but for much more general (SU(3) dynamically symmetric) Hamiltonians as well, which result in realistic energy spectra.

  12. Wind tunnel modeling of roadways: Comparison with mathematical models

    International Nuclear Information System (INIS)

    Heidorn, K.; Davies, A.E.; Murphy, M.C.

    1991-01-01

    The assessment of air quality impacts from roadways is a major concern to urban planners. To assess future road and building configurations, a number of techniques have been developed, including mathematical models, which simulate traffic emissions and atmospheric dispersion through a series of mathematical relationships, and physical models, which simulate emissions and dispersion by scaling these processes in a wind tunnel. Two roadway mathematical models, HIWAY-2 and CALINE-4, were applied to a proposed development in a large urban area. Physical modeling procedures developed by Rowan Williams Davies and Irwin Inc. (RWDI) in the form of line source simulators were also applied, and the resulting carbon monoxide concentrations were compared. The results indicated a factor-of-two agreement between the mathematical and physical models. The physical model, however, reacted to changes in building massing and configuration; the mathematical models did not, since no provision for such changes was included in them. In general, the RWDI model resulted in higher concentrations than either HIWAY-2 or CALINE-4. Where there was underprediction, it was often due to shielding of the receptor by surrounding buildings. Comparison of these three models with the CALTRANS Tracer Dispersion Experiment showed good results, although concentrations were consistently underpredicted

  13. Analysis of deregulation models; Denryoku shijo jiyuka model no bunseki

    Energy Technology Data Exchange (ETDEWEB)

    Yajima, M. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1996-04-01

    Trends toward power market deregulation were investigated in Japan and 16 other countries, and various deregulation models were examined and evaluated for their merits and demerits. There are four basic models: the franchise bidding model, the competitive bidding in power generation model, the wholesale or retail wheeling model, and the mandatory or voluntary pool model. Power market deregulation has been a global tendency since the second half of the 1970s, with various models adopted by different countries. Of these models, it is the retail wheeling model and the pool models (open access models) that allow the final customer to select power suppliers, and the number of countries adopting them is increasing. These models are characterized in that the disintegration of the vertical transmission-distribution integration (separation of distribution service and retail supply service) and the liberalization of the retail market are accomplished simultaneously. The pool models, in particular, are enjoying favor because conditions for fair competition have already been prepared and because they are believed to be highly efficient. In Japan and France, where importance is attached to atomic power generation, the competitive bidding model is adopted as a means to harmonize the introduction of competition into the source development and power generation sectors. 7 refs., 4 tabs.

  14. Modelling Overview

    DEFF Research Database (Denmark)

    Larsen, Lars Bjørn; Vesterager, Johan

    This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods and identifies tools, which can provide support for this modelling activity.The model adopted for global manufacturing is that of an extended enterprise s...

  15. Statistical Model Checking of Rich Models and Properties

    DEFF Research Database (Denmark)

    Poulsen, Danny Bøgsted

    in undecidability issues for the traditional model checking approaches. Statistical model checking has proven itself a valuable supplement to model checking and this thesis is concerned with extending this software validation technique to stochastic hybrid systems. The thesis consists of two parts: the first part...... motivates why existing model checking technology should be supplemented by new techniques. It also contains a brief introduction to probability theory and concepts covered by the six papers making up the second part. The first two papers are concerned with developing online monitoring techniques...... systems. The fifth paper shows how stochastic hybrid automata are useful for modelling biological systems and the final paper is concerned with showing how statistical model checking is efficiently distributed. In parallel with developing the theory contained in the papers, a substantial part of this work...

  16. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  17. Geochemistry Model Validation Report: Material Degradation and Release Model

    International Nuclear Information System (INIS)

    Stockman, H.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17)

  18. Assessing physical models used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Computer codes used to predict the behaviour of aerosols in water-cooled reactor containment buildings after severe accidents contain a variety of physical models. Special models are in place for describing agglomeration processes where small aerosol particles combine to form larger ones. Other models are used to calculate the rates at which aerosol particles are deposited on building structures. Condensation of steam on aerosol particles is currently a very active area in aerosol modelling. In this paper, the physical models incorporated in the currently available international codes for all of these processes are reviewed and documented. There is considerable variation in models used in different codes, and some uncertainties exist as to which models are superior. 28 refs

  19. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    Science.gov (United States)

    2015-11-04

    The Particle Tracking Model (PTM) is a Lagrangian...currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave...and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development

  20. A BRDF statistical model applying to space target materials modeling

    Science.gov (United States)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    In order to solve the problem of the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters that can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets, and the fitting errors of all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen, demonstrating the refined model's ability to characterize materials.
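
The genetic-algorithm inversion step can be sketched on a simplified two-parameter specular lobe standing in for the six-parameter refined model; the lobe form, the GA settings, and the synthetic "measurements" are all assumptions for illustration, not the paper's model or data.

```python
import math, random

random.seed(42)

def lobe(theta, k, sigma):
    # Simplified Gaussian specular lobe: amplitude k, roughness-like width sigma
    return k * math.exp(-theta * theta / (2.0 * sigma * sigma))

true_k, true_sigma = 0.8, 0.3
angles = [i * 0.05 for i in range(30)]            # zenith angles in radians
measured = [lobe(a, true_k, true_sigma) for a in angles]

def sse(params):
    # Fitness is (negated) sum of squared fitting errors against the data
    k, s = params
    return sum((lobe(a, k, s) - m) ** 2 for a, m in zip(angles, measured))

def genetic_fit(pop_size=60, generations=80):
    pop = [(random.uniform(0.1, 2.0), random.uniform(0.05, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        elite = pop[: pop_size // 4]               # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.choice(elite), random.choice(elite)
            w = random.random()                    # blend crossover
            child = (w * a[0] + (1 - w) * b[0], w * a[1] + (1 - w) * b[1])
            child = (child[0] + random.gauss(0, 0.02),          # Gaussian mutation
                     max(0.01, child[1] + random.gauss(0, 0.02)))
            children.append(child)
        pop = elite + children
    return min(pop, key=sse)

k_fit, sigma_fit = genetic_fit()
```

Because the elite survives unchanged, the best fitness is monotone over generations; on this smooth two-parameter problem the recovered amplitude and width land close to the true values.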

  1. EIA model documentation: Petroleum market model of the national energy modeling system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-28

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level.

  2. EIA model documentation: Petroleum market model of the national energy modeling system

    International Nuclear Information System (INIS)

    1995-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level

  3. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do with how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  4. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.
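
One of the model classes reviewed, the stochastic integrate-and-fire neuron, can be sketched with a minimal Euler-Maruyama simulation; all parameter values below are illustrative assumptions.

```python
import math, random

def simulate_lif(i_input, sigma, t_end=1.0, dt=1e-4, seed=0):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron:
    dV = (-(V - v_rest)/tau + i_input) dt + sigma dW, with spike-and-reset
    whenever V crosses the threshold v_th."""
    rng = random.Random(seed)
    v_rest, v_th, v_reset, tau = 0.0, 1.0, 0.0, 0.02
    v, spikes = v_rest, 0
    sq = math.sqrt(dt)                      # dW has standard deviation sqrt(dt)
    for _ in range(int(t_end / dt)):
        v += (-(v - v_rest) / tau + i_input) * dt + sigma * sq * rng.gauss(0.0, 1.0)
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

quiet = simulate_lif(i_input=40.0, sigma=0.0)   # subthreshold: i*tau = 0.8 < v_th
driven = simulate_lif(i_input=60.0, sigma=0.0)  # suprathreshold: i*tau = 1.2 > v_th
noisy = simulate_lif(i_input=40.0, sigma=1.5)   # noise-induced spiking
```

The third call illustrates the point made above about the constructive role of noise: the deterministic subthreshold neuron is silent, but the same drive plus noise produces spikes.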

  5. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the six-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard four-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, allowing in particular to follow both a mixture velocity and a relative velocity. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)

  6. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    Science.gov (United States)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting the reference evapotranspiration (ET0) process is essential for several agricultural and hydrological applications, especially for rural water resource systems, water use allocations, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small number of climatic data generated from the minimum and maximum air temperatures and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), integrated adaptive neuro-fuzzy inference systems with grid partitioning and subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The implemented monthly time scale data set was collected at the Antalya and Isparta stations, which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. The accuracy of validation was assessed with multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R²), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the conducted models were highly practical and reliable for the investigated case studies. At the Antalya station, the performance of the GEP and GRNN models was better than that of the other investigated models, while the performance of the RBNN and ANFIS-SC models was best at the Isparta station. Except for the MLP model, all the investigated models presented better performance accuracy than the HS and CHS empirical models when applied in a cross-station scenario. A cross-station scenario examination implies the
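
The validation metrics named above can be computed directly; a small sketch with made-up observed/predicted ET0 values, where NS is the Nash-Sutcliffe efficiency and R² the squared Pearson correlation:

```python
import math

def rmse(obs, pred):
    # Root mean squared error
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    # Mean absolute error
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def nash_sutcliffe(obs, pred):
    # NS = 1 - SSE / total variance of the observations; 1 is a perfect model
    mean_o = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - sse / sst

def r_squared(obs, pred):
    # Squared Pearson correlation between observed and predicted series
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

# Hypothetical monthly ET0 values (mm/day), for illustration only
et0_obs = [1.0, 2.0, 3.0, 4.0]
et0_pred = [1.1, 1.9, 3.2, 3.9]
```

Unlike R², which only measures linear association, NS penalizes bias as well, which is why hydrological studies typically report both.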

  7. Deformed baryons: constituent quark model vs. bag model

    International Nuclear Information System (INIS)

    Iwamura, Y.; Nogami, Y.

    1985-01-01

    Recently Bhaduri et al. developed a nonrelativistic constituent quark model for deformed baryons. In that model the quarks move in a deformable mean field, and the deformation parameters are determined by minimizing the quark energy subject to the constraint of volume conservation. This constraint is an ad hoc assumption. It is shown that, starting with a bag model, a model similar to that of Bhaduri et al. can be constructed. The deformation parameters are determined by the pressure balance on the bag surface. There is, however, a distinct difference between the two models with respect to the state dependence of the "volume". Implications of this difference are discussed.

  8. Business Model Innovation: How Iconic Business Models Emerge

    OpenAIRE

    Mikhalkina, T.; Cabantous, L.

    2015-01-01

    Despite ample research on the topic of business model innovation, little is known about the cognitive processes whereby some innovative business models gain the status of iconic representations of particular types of firms. This study addresses the question: How do iconic business models emerge? In other words: How do innovative business models become prototypical exemplars for new categories of firms? We focus on the case of Airbnb, and analyze how six mainstream business media publications ...

  9. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    Theoretical and applied aspects of developing simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is argued that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stocks control is presented that supports management decisions in production logistics.
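The abstract gives no model details, so the following is only a generic sketch of the kind of stocks-control simulation it describes: a periodic-review (s, S) policy under random demand. All parameter names, cost values, and the demand distribution are hypothetical:

```python
import random

def simulate_sS(s, S, periods, demand_mean, seed=0, hold_cost=1.0, order_cost=50.0):
    """Minimal (s, S) stocks-control simulation: when on-hand inventory falls
    to the reorder point s, replenish up to the order-up-to level S.
    Returns the total holding plus ordering cost over the horizon."""
    rng = random.Random(seed)          # fixed seed for a reproducible run
    stock = S
    cost = 0.0
    for _ in range(periods):
        demand = rng.randint(0, 2 * demand_mean)  # crude uniform demand model
        stock = max(stock - demand, 0)
        cost += hold_cost * stock                 # holding cost on what remains
        if stock <= s:
            cost += order_cost                    # fixed cost per replenishment
            stock = S
    return cost

total_cost = simulate_sS(s=10, S=50, periods=100, demand_mean=5)
```

Such a model lets alternative (s, S) pairs be compared by simulated cost before committing to a policy.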

  10. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate the uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural networks (ANN), and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL and ANN models showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
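The BIC-based weighting and the within-/between-model variance split described above can be sketched in a few lines. The exp(-ΔBIC/2) weight is the standard BMA approximation; whether BAIMA uses exactly this form is not stated in the abstract, so treat it as an assumption:

```python
import math

def bic_weights(bics):
    """BMA model weights from BIC scores: w_i proportional to exp(-dBIC_i/2),
    where dBIC_i = BIC_i - min(BIC). Lower BIC -> higher weight."""
    bmin = min(bics)
    raw = [math.exp(-(b - bmin) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_mean_and_variance(means, variances, weights):
    """BMA point estimate and total variance = within-model + between-model."""
    mean = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - mean) ** 2 for w, m in zip(weights, means))
    return mean, within + between

w = bic_weights([100.0, 102.0, 110.0])   # best model gets the largest weight
m, v = bma_mean_and_variance([1.0, 3.0], [0.5, 0.5], [0.5, 0.5])
```

Note how the between-model term is exactly the spread of the individual estimates around the averaged one, which is the quantity the abstract says is lost when only a single AI model is used.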

  11. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  12. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers, or reactions. After generating the models, the software can automatically fit all of them to the data and, if data are available, provides a ranking for model selection. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.
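The core idea of deriving a candidate set from one master model by leaving out elements can be illustrated with a toy enumeration. This sketch works on plain reaction names rather than SBML, and the reaction labels are invented for illustration:

```python
from itertools import combinations

def candidate_models(master_reactions, optional):
    """Enumerate candidate models by leaving out every subset of the
    reactions marked optional, keeping the master model's ordering.
    Mimics (very loosely) deriving candidates from a single master model."""
    removable = [r for r in master_reactions if r in optional]
    models = []
    for k in range(len(removable) + 1):
        for removed in combinations(removable, k):
            models.append([r for r in master_reactions if r not in removed])
    return models

# 2 optional reactions -> 2**2 = 4 candidate models, including the full master
cands = candidate_models(["synthesis", "degradation", "feedback", "transport"],
                         optional={"feedback", "transport"})
```

Each candidate would then be fitted to data and ranked, which is the step ModelMage delegates to COPASI.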

  13. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...
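The linear logistic model at the heart of the contents above can be fitted to binomial data in a few lines. This is a minimal sketch using plain gradient ascent on the log-likelihood; textbook treatments, including this one, use iteratively reweighted least squares instead, and the dose-response data below are made up:

```python
import math

def fit_logistic(doses, n_trials, n_success, iters=2000, lr=0.01):
    """Fit logit(p) = a + b*x to binomial data (y successes out of n at dose x)
    by gradient ascent on the binomial log-likelihood."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        grad_a = grad_b = 0.0
        for x, n, y in zip(doses, n_trials, n_success):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            grad_a += y - n * p          # d logL / da
            grad_b += (y - n * p) * x    # d logL / db
        a += lr * grad_a
        b += lr * grad_b
    return a, b

# Toy dose-response data: success counts out of 10 trials per dose level
a, b = fit_logistic([0, 1, 2, 3, 4], [10] * 5, [1, 3, 5, 7, 9])
```

The fitted slope b is positive (response probability rises with dose), and since the toy data are symmetric about x = 2, the fitted intercept satisfies a close to -2b.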

  14. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  15. Multistate Model Builder (MSMB): a flexible editor for compact biochemical models.

    Science.gov (United States)

    Palmisano, Alida; Hoops, Stefan; Watson, Layne T; Jones, Thomas C; Tyson, John J; Shaffer, Clifford A

    2014-04-04

    Building models of molecular regulatory networks is challenging not just because of the intrinsic difficulty of describing complex biological processes. Writing a model is a creative effort that calls for more flexibility and interactive support than offered by many of today's biochemical model editors. Our model editor MSMB - Multistate Model Builder - supports multistate models created using different modeling styles. MSMB provides two separate advances on existing network model editors. (1) A simple but powerful syntax is used to describe multistate species. This reduces the number of reactions needed to represent certain molecular systems, thereby reducing the complexity of model creation. (2) Extensive feedback is given during all stages of the model creation process on the existing state of the model. Users may activate error notifications of varying stringency on the fly, and use these messages as a guide toward a consistent, syntactically correct model. MSMB default values and behavior during model manipulation (e.g., when renaming or deleting an element) can be adapted to suit the modeler, thus supporting creativity rather than interfering with it. MSMB's internal model representation allows saving a model with errors and inconsistencies (e.g., an undefined function argument; a syntactically malformed reaction). A consistent model can be exported to SBML or COPASI formats. We show the effectiveness of MSMB's multistate syntax through models of the cell cycle and mRNA transcription. Using multistate reactions reduces the number of reactions needed to encode many biochemical network models. This reduces the cognitive load for a given model, thereby making it easier for modelers to build more complex models. The many interactive editing support features provided by MSMB make it easier for modelers to create syntactically valid models, thus speeding model creation. Complete information and the installation package can be found at http
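The point of advance (1), that a multistate declaration collapses a combinatorial list of plain species, can be illustrated with a toy expansion. The `Cdc2` name and site labels are invented for illustration; MSMB's actual syntax and semantics are richer than this:

```python
from itertools import product

def expand_multistate(name, sites):
    """Enumerate the concrete species implied by one multistate declaration.
    E.g. one species with two binary phosphorylation sites expands to
    2 * 2 = 4 plain species, and the reaction count grows accordingly."""
    site_names = sorted(sites)
    species = []
    for values in product(*(sites[s] for s in site_names)):
        tags = ";".join(f"{s}={v}" for s, v in zip(site_names, values))
        species.append(f"{name}({tags})")
    return species

# One multistate species with two sites, each unphosphorylated (u) or phosphorylated (p)
states = expand_multistate("Cdc2", {"p1": ["u", "p"], "p2": ["u", "p"]})
```

A multistate editor lets the modeler write one declaration and one reaction template instead of enumerating all of these by hand.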

  16. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  17. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  18. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. Therefore, SWC controls plant distribution over the Earth's surface, ranging from deserts and grasslands to rain forests. Unfortunately, few data on SWC are available, as its measurement is very time consuming and costly and requires specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents the inclusion of SWC as a predictor in plant species distribution models (SDMs). The aims of this study were, first, to test a new methodology for overcoming the scarcity of SWC measurements and, second, to model and spatially project SWC in order to improve plant SDMs through the inclusion of an SWC parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF = 0 to pF = 4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several candidate predictors. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation), and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only bivariate models with adjusted R2 > 0.5 for each SWC at different pF. The second step consisted of running plant SDMs including modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. SWC ensemble models remained very good, with
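The screening-and-weighting step described above can be sketched as follows. The abstract specifies only the adjusted R2 > 0.5 screen; using adjusted R2 itself as the ensemble weight is an assumption here, and the numbers are illustrative:

```python
def weighted_ensemble(predictions, adj_r2, threshold=0.5):
    """Combine bivariate model predictions into one ensemble estimate,
    keeping only models whose adjusted R2 exceeds the threshold and
    weighting the survivors by their adjusted R2."""
    kept = [(p, w) for p, w in zip(predictions, adj_r2) if w > threshold]
    if not kept:
        raise ValueError("no bivariate model passed the adjusted-R2 screen")
    total = sum(w for _, w in kept)
    return sum(p * w for p, w in kept) / total

# Three hypothetical bivariate SWC predictions at one pF level; the third
# model (adjusted R2 = 0.4) is screened out before averaging
swc = weighted_ensemble(predictions=[0.32, 0.28, 0.40], adj_r2=[0.8, 0.6, 0.4])
```

The same ensemble would be built separately at each pF level, giving one spatially projectable SWC layer per degree of water availability.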

  19. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    and test models systematically, efficiently and reliably. In this way, development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem specific models is presented....... The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene...

  20. A simplified model exploration research of new anisotropic diffuse radiation model

    International Nuclear Information System (INIS)

    Yao, Wanxiang; Li, Zhengrong; Wang, Xiao; Zhao, Qun; Zhang, Zhigang; Lin, Lin

    2016-01-01

    Graphical abstract: The specific process of measured diffuse radiation data. - Highlights: • A simplified diffuse radiation model is extremely important for solar radiation simulation and energy simulation. • A new simplified anisotropic diffuse radiation model (NSADR model) is proposed. • The accuracy of existing models and the NSADR model is compared based on measured values. • The accuracy of the NSADR model is higher than that of the existing models, making it suitable for calculating diffuse radiation. - Abstract: A more accurate new anisotropic diffuse radiation model (NADR model) has been proposed, but its parameters and calculation process are complex, so it is difficult to use widely in simulation software and engineering calculations. Based on an analysis of the diffuse radiation model and measured diffuse radiation data, this paper puts forward three hypotheses: (1) diffuse radiation from the sky horizontal region is concentrated in a very thin layer close to the line source; (2) diffuse radiation from the circumsolar region is concentrated at the point of the Sun; (3) diffuse radiation from the orthogonal region is concentrated at the point located at a 90-degree angle to the Sun. Based on these hypotheses, the NADR model is simplified to a new simplified anisotropic diffuse radiation model (NSADR model). The accuracy of the NADR model and its simplified model (the NSADR model) is then compared with existing models based on measured values, and the result shows that the Perez model and its simplified model are relatively accurate among existing models. However, the accuracy of these two models is lower than that of the NADR and NSADR models because they neglect the influence of the orthogonal diffuse radiation. The accuracy of the NSADR model is higher than that of the existing models; moreover, the NSADR model simplifies the solution of parameters and the calculation. Therefore it is more suitable for

  1. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D. W. Wu

    2003-07-16

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  2. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-10-27

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  3. Biosphere Model Report

    International Nuclear Information System (INIS)

    D. W. Wu

    2003-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  4. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
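Two of the classes above can be made concrete with the two most common criteria: AIC (targets low predictive error, nonconsistent) and BIC (targets high model probability, consistent). A toy comparison in which the two disagree; the log-likelihoods are invented precisely to show that BIC's stronger complexity penalty can reverse AIC's choice:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 logL. Lower is better."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln(n) - 2 logL. Lower is better;
    the k ln(n) term penalizes complexity more heavily than AIC once n > e^2."""
    return k * math.log(n) - 2 * log_lik

n = 100  # hypothetical number of observations
# A simple model (k = 2) versus a more complex one (k = 6) that fits a bit better
aic_simple, aic_complex = aic(-120.0, 2), aic(-114.0, 6)
bic_simple, bic_complex = bic(-120.0, 2, n), bic(-114.0, 6, n)
```

Here AIC prefers the complex model (240 vs. 244) while BIC prefers the simple one, which is exactly why choosing a criterion without fixing the modeling goal, as the paper argues, is not a neutral decision.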

  5. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
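The two value computations contrasted above reduce to a few lines each. A schematic sketch (the learning rate and numbers are arbitrary, and a real model-based learner would also have to learn the transition probabilities rather than being handed them):

```python
def model_free_update(q, reward, alpha=0.1):
    """Model-free delta rule: the cached value moves toward the obtained
    reward by a fraction alpha of the reward prediction error (RPE)."""
    rpe = reward - q
    return q + alpha * rpe, rpe

def model_based_value(transition_probs, state_values):
    """Model-based value: an expectation under a learned model of the
    world's transition structure, rather than cached reinforcement."""
    return sum(p * v for p, v in zip(transition_probs, state_values))

q, rpe = model_free_update(q=0.5, reward=1.0)   # positive RPE, value nudged up
v = model_based_value([0.7, 0.3], [1.0, 0.0])   # expected value over outcomes
```

The paper's point is that the brain appears to compute an RPE-like signal against both kinds of value estimate, not only the model-free one.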

  6. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that incorporates the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.
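The "abstract and biological sequence operators" mentioned above are defined in the paper at the conceptual level, not as code; the following is only an illustrative sketch of two such operators over DNA strings (the function names and the 1-based range convention are assumptions):

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Biological sequence operator: the Watson-Crick reverse complement,
    the strand that pairs with seq read in the opposite direction."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def subsequence(seq, start, end):
    """Abstract sequence operator: extract a 1-based inclusive range,
    the convention commonly used for genomic coordinates."""
    return seq[start - 1:end]

rc = reverse_complement("ATGC")    # the complementary strand, reversed
sub = subsequence("ATGCAT", 2, 4)  # bases 2 through 4
```

A sequence-aware data model would expose operators like these inside the query language itself, rather than forcing applications to fetch raw strings and manipulate them externally.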

  8. Modelling of JET diagnostics using Bayesian Graphical Models

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, J. [IPP Greifswald, Greifswald (Germany); Ford, O. [Imperial College, London (United Kingdom); McDonald, D.; Hole, M.; Nessi, G. von; Meakins, A.; Brix, M.; Thomsen, H.; Werner, A.; Sirinelli, A.

    2011-07-01

The mapping between the physics parameters (densities, currents, flows, temperatures, etc.) that define the plasma 'state' under a given model and the raw observations of each plasma diagnostic 1) depends on the particular physics model used, and 2) is inherently probabilistic, owing to uncertainties in both the observations and instrumental aspects of the mapping, such as calibrations and instrument functions. A flexible and principled way of modelling such interconnected probabilistic systems is through so-called Bayesian graphical models. Being an amalgam of graph theory and probability theory, Bayesian graphical models can simulate the complex interconnections between physics models and diagnostic observations from multiple heterogeneous diagnostic systems, making it relatively easy to optimally combine the observations from multiple diagnostics for joint inference on parameters of the underlying physics model, which can itself be represented as part of the graph. At JET, about 10 diagnostic systems have to date been modelled in this way, which has led to a number of new results, including: the reconstruction of the flux surface topology and q-profiles without any specific equilibrium assumption, using information from a number of different diagnostic systems; profile inversions taking into account the uncertainties in the flux surface positions; and a substantial increase in accuracy of JET electron density and temperature profiles, including improved pedestal resolution, through the joint analysis of three diagnostic systems. It is believed that the Bayesian graph approach could potentially be utilised for very large sets of diagnostics, providing a generic data analysis framework for nuclear fusion experiments that would be able to optimally utilise the information from multiple diagnostics simultaneously, and where the explicit graph representation of the connections to underlying physics models could be used for sophisticated model testing. This
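The core idea of joint inference over several diagnostics can be illustrated with a toy two-node graph: two instruments observe the same plasma density through Gaussian likelihoods, and the posterior is their product over a flat prior. All numbers and instrument uncertainties below are invented for illustration; real JET graphs involve full forward models per diagnostic:

```python
# Toy sketch: combine two diagnostics observing the same density n_e
# (units of 10^19 m^-3) via a grid-based Bayesian posterior.
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical observations of the same quantity:
obs_a, sigma_a = 3.2, 0.4   # diagnostic A, larger uncertainty
obs_b, sigma_b = 3.0, 0.2   # diagnostic B, better calibrated

# Flat prior on a grid; joint posterior ∝ product of the two likelihoods.
grid = [i * 0.01 for i in range(100, 600)]
post = [gaussian(obs_a, n, sigma_a) * gaussian(obs_b, n, sigma_b) for n in grid]
norm = sum(post)
post = [p / norm for p in post]

mean = sum(n * p for n, p in zip(grid, post))
print(f"posterior mean n_e = {mean:.3f}")
```

Note how the posterior mean sits closer to the better-calibrated diagnostic, weighted by inverse variance; this is the one-parameter analogue of what the graph does jointly for many parameters and instruments.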

  9. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention, and improve communication among speakers of different native languages. However, despite these benefits, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden-Turner Culture Model, and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  10. Business Models and Business Model Innovation

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Saebi, Tina

    2018-01-01

    While research on business models and business model innovation continue to exhibit growth, the field is still, even after more than two decades of research, characterized by a striking lack of cumulative theorizing and an opportunistic borrowing of more or less related ideas from neighbouring...

  11. The bumper bundle book of modelling NLP modelling made simple

    CERN Document Server

    Burgess, Fran

    2014-01-01

    A Neurolinguistic Programming textbook which focusses on the core activity of NLP - modelling. It covers the thinking behind NLP modelling, presents an extensive range of modelling methodologies and skills, offers applications of modelling, and provides specific details for model and technique construction.

  12. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  13. Ecological models and pesticide risk assessment: current modeling practice.

    Science.gov (United States)

    Schmolke, Amelie; Thorbek, Pernille; Chapman, Peter; Grimm, Volker

    2010-04-01

    Ecological risk assessments of pesticides usually focus on risk at the level of individuals, and are carried out by comparing exposure and toxicological endpoints. However, in most cases the protection goal is populations rather than individuals. On the population level, effects of pesticides depend not only on exposure and toxicity, but also on factors such as life history characteristics, population structure, timing of application, presence of refuges in time and space, and landscape structure. Ecological models can integrate such factors and have the potential to become important tools for the prediction of population-level effects of exposure to pesticides, thus allowing extrapolations, for example, from laboratory to field. Indeed, a broad range of ecological models have been applied to chemical risk assessment in the scientific literature, but so far such models have only rarely been used to support regulatory risk assessments of pesticides. To better understand the reasons for this situation, the current modeling practice in this field was assessed in the present study. The scientific literature was searched for relevant models and assessed according to nine characteristics: model type, model complexity, toxicity measure, exposure pattern, other factors, taxonomic group, risk assessment endpoint, parameterization, and model evaluation. The present study found that, although most models were of a high scientific standard, many of them would need modification before they are suitable for regulatory risk assessments. The main shortcomings of currently available models in the context of regulatory pesticide risk assessments were identified. When ecological models are applied to regulatory risk assessments, we recommend reviewing these models according to the nine characteristics evaluated here. (c) 2010 SETAC.

  14. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

In this paper a new model validation procedure for logistic regression models is presented. First, we give a brief review of different model validation techniques. Next, we define a number of properties required for a model to be considered "good", along with a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
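Two quantitative performance measures commonly used when validating a logistic model are classification accuracy at a cutoff and the AUC (discrimination). The abstract does not specify which measures the paper uses, so the sketch below shows these two standard ones on invented predictions:

```python
# Sketch of two common validation measures for a fitted logistic model:
# accuracy at a 0.5 cutoff and AUC via the rank (Mann-Whitney) statistic.
# The labels and predicted probabilities below are invented.

def accuracy(y_true, p_pred, cutoff=0.5):
    return sum((p >= cutoff) == bool(y) for y, p in zip(y_true, p_pred)) / len(y_true)

def auc(y_true, p_pred):
    # Probability that a random positive is scored above a random negative.
    pos = [p for y, p in zip(y_true, p_pred) if y == 1]
    neg = [p for y, p in zip(y_true, p_pred) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]
p = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
print(accuracy(y, p), auc(y, p))
```

On a held-out validation set (as the paper's procedure requires), both measures should be computed on data not used for fitting.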

  15. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    Science.gov (United States)

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid
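The MDPE and MDAPE statistics used for the external validation above are medians of the relative prediction errors, in percent of the observed level. A minimal sketch, with invented serum levels (not the study's data):

```python
# Median Prediction Error (MDPE, bias) and Median Absolute Prediction Error
# (MDAPE, precision), both as percentages of the observed value.
from statistics import median

def mdpe(observed, predicted):
    return median((p - o) / o * 100 for o, p in zip(observed, predicted))

def mdape(observed, predicted):
    return median(abs(p - o) / o * 100 for o, p in zip(observed, predicted))

obs  = [8.0, 5.0, 2.0, 10.0]   # hypothetical measured gentamicin levels (mg/L)
pred = [7.6, 5.2, 2.1, 9.5]    # hypothetical model-predicted levels

print(mdpe(obs, pred), mdape(obs, pred))
```

A model with MDPE near 0% is unbiased; a small MDAPE (the study reports values under 5%) means the typical prediction is close to the measurement.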

  16. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    Directory of Open Access Journals (Sweden)

    Anna Gomes

Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to

  17. Viscoelastic Model for Lung Parenchyma for Multi-Scale Modeling of Respiratory System, Phase II: Dodecahedral Micro-Model

    Energy Technology Data Exchange (ETDEWEB)

    Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.

    2012-03-01

    In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.

  18. Ventilation Model

    International Nuclear Information System (INIS)

    Yang, H.

    1999-01-01

    The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future
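The heat-removal calculation at the core of such a ventilation model reduces, in steady state, to a sensible-heat balance on the airflow: Q = ṁ·cp·(T_out − T_in). A back-of-envelope sketch, with an illustrative flow rate and temperatures that are assumptions, not design values from the AMR:

```python
# Sensible heat removed by continuous ventilation: Q = m_dot * cp * dT.
# Flow rate and temperatures below are invented for illustration.

def heat_removal_kw(volumetric_flow_m3_s, t_in_c, t_out_c,
                    rho_air=1.2, cp_air=1.005):
    """Heat removed (kW) by air of density rho_air (kg/m^3) and
    specific heat cp_air (kJ/kg.K) warming from t_in_c to t_out_c."""
    m_dot = volumetric_flow_m3_s * rho_air          # mass flow, kg/s
    return m_dot * cp_air * (t_out_c - t_in_c)      # kW

# e.g. 15 m^3/s of air entering at 25 C and leaving the drift at 45 C:
q = heat_removal_kw(15.0, 25.0, 45.0)
print(f"{q:.1f} kW")
```

Moisture removal adds a latent-heat term to this balance, which is why the AMR flags heat/moisture coupling as future work beyond the heat-only calculation.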

  19. Methodology for geometric modelling. Presentation and administration of site descriptive models; Metodik foer geometrisk modellering. Presentation och administration av platsbeskrivande modeller

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hermanson, Jan [Golder Associates (Sweden)

    2001-03-01

This report presents a methodology to construct, visualise and present geoscientific descriptive models based on data from the site investigations which SKB is currently performing in order to build an underground nuclear waste disposal facility in Sweden. It is designed for interaction with SICADA (SKB's site characterisation database) and RVS (SKB's Rock Visualisation System). However, the concepts of the methodology are general and can be used with other tools capable of handling 3D geometries and parameters. The descriptive model is intended to be an instrument where site investigation data from all disciplines are put together to form a comprehensive visual interpretation of the studied rock mass. The methodology has four main components: 1. Construction of a geometrical model of the interpreted main structures at the site. 2. Description of the geoscientific characteristics of the structures. 3. Description and geometrical implementation of the geometric uncertainties in the interpreted model structures. 4. A quality system for the handling of the geometrical model, its associated database and some aspects of the technical auditing. The geometrical model forms a basis for understanding the main elements and structures of the investigated site. Once the interpreted geometries are in place in the model, the system allows descriptive and quantitative data to be added to each modelled object through a system of intuitive menus. The associated database gives each geometrical object a complete quantitative description covering all geoscientific disciplines, variabilities, uncertainties in interpretation, and a full version history. The complete geometrical model and its associated database of object descriptions are to be recorded in a central quality system. Official, new and old versions of the model are administered centrally in order to have complete quality assurance of each step in the interpretation process. The descriptive model is a cornerstone in the understanding of the

  20. Coupled model of INM-IO global ocean model, CICE sea ice model and SCM OIAS framework

    Science.gov (United States)

    Bayburin, Ruslan; Rashit, Ibrayev; Konstantin, Ushakov; Vladimir, Kalmykov; Gleb, Dyakonov

    2015-04-01

The status of a coupled Arctic ocean and sea ice model is presented. The model consists of the high-resolution INM IO global ocean component, the Los Alamos National Laboratory CICE sea ice model, and the SCM OIAS framework for ocean-ice-atmosphere-land coupled modeling on massively parallel architectures. The model is currently under development at the Institute of Numerical Mathematics (INM), the Hydrometeorological Center (HMC) and the P.P. Shirshov Institute of Oceanology (IO). It is aimed at modeling the intra-annual variability of hydrodynamics in the Arctic. The computational characteristics of the world ocean-sea ice coupled model governed by SCM OIAS are presented. The model is parallelized using MPI technologies and can currently use up to 5000 cores efficiently. Details of the programming implementation, computational configuration and parametrization of physical phenomena are analyzed in terms of the coupling complex. Results of a five-year computational experiment on the evolution of sea ice, snow and ocean state in the Arctic region, on a tripole grid with a horizontal resolution of 3-5 kilometers and forced by an atmospheric field from a repeating "normal" annual cycle taken from the CORE1 experiment database, are presented and analyzed in terms of the vorticity state and the expansion of warm Atlantic water.

  1. Models of breast cancer: quo vadis, animal modeling?

    International Nuclear Information System (INIS)

    Wagner, Kay-Uwe

    2004-01-01

    Rodent models for breast cancer have for many decades provided unparalleled insights into cellular and molecular aspects of neoplastic transformation and tumorigenesis. Despite recent improvements in the fidelity of genetically engineered mice, rodent models are still being criticized by many colleagues for not being 'authentic' enough to the human disease. Motives for this criticism are manifold and range from a very general antipathy against the rodent model system to well-founded arguments that highlight physiological variations between species. Newly proposed differences in genetic pathways that cause cancer in humans and mice invigorated the ongoing discussion about the legitimacy of the murine system to model the human disease. The present commentary intends to stimulate a debate on this subject by providing the background about new developments in animal modeling, by disputing suggested limitations of genetically engineered mice, and by discussing improvements but also ambiguous expectations on the authenticity of xenograft models to faithfully mimic the human disease

  2. MeSH: a window into full text for document summarization.

    Science.gov (United States)

    Bhattacharya, Sanmitra; Ha-Thuc, Viet; Srinivasan, Padmini

    2011-07-01

    Previous research in the biomedical text-mining domain has historically been limited to titles, abstracts and metadata available in MEDLINE records. Recent research initiatives such as TREC Genomics and BioCreAtIvE strongly point to the merits of moving beyond abstracts and into the realm of full texts. Full texts are, however, more expensive to process not only in terms of resources needed but also in terms of accuracy. Since full texts contain embellishments that elaborate, contextualize, contrast, supplement, etc., there is greater risk for false positives. Motivated by this, we explore an approach that offers a compromise between the extremes of abstracts and full texts. Specifically, we create reduced versions of full text documents that contain only important portions. In the long-term, our goal is to explore the use of such summaries for functions such as document retrieval and information extraction. Here, we focus on designing summarization strategies. In particular, we explore the use of MeSH terms, manually assigned to documents by trained annotators, as clues to select important text segments from the full text documents. Our experiments confirm the ability of our approach to pick the important text portions. Using the ROUGE measures for evaluation, we were able to achieve maximum ROUGE-1, ROUGE-2 and ROUGE-SU4 F-scores of 0.4150, 0.1435 and 0.1782, respectively, for our MeSH term-based method versus the maximum baseline scores of 0.3815, 0.1353 and 0.1428, respectively. Using a MeSH profile-based strategy, we were able to achieve maximum ROUGE F-scores of 0.4320, 0.1497 and 0.1887, respectively. Human evaluation of the baselines and our proposed strategies further corroborates the ability of our method to select important sentences from the full texts. sanmitra-bhattacharya@uiowa.edu; padmini-srinivasan@uiowa.edu.
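The ROUGE-1 scores reported above measure unigram overlap between a candidate summary and a reference. A minimal sketch of ROUGE-1 recall, precision and F-score follows; real ROUGE additionally supports stemming, stopword removal and multiple references, which this toy version omits:

```python
# Toy ROUGE-1: unigram-overlap recall, precision and F-score between a
# candidate summary and a single reference. Example sentences are invented.
from collections import Counter

def rouge_1(candidate: str, reference: str):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # clipped unigram matches
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return recall, precision, f1

r, p, f = rouge_1("the cat sat on the mat", "the cat lay on the mat")
print(r, p, f)
```

ROUGE-2 and ROUGE-SU4, also reported in the abstract, apply the same clipped-overlap idea to bigrams and skip-bigrams respectively.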

  3. Crecimiento y mortalidad natural del pez Haemulon aurolineatum (Teleostei: Haemulidae) del suroeste de la isla de Margarita, Venezuela

    Directory of Open Access Journals (Sweden)

    Edwis Bravo

    2009-09-01

Growth and natural mortality of the cují (Haemulon aurolineatum) from the southwest of Margarita Island were determined between July 2005 and June 2006, based on a sample of 2 541 specimens collected from the artisanal fishery of Boca del Río. The length-weight relationship showed no significant differences between males and females in either the slopes "b" (p>0.05, ts = -1.69) or the intercepts "a" (p>0.05, ts = -1.01), so a common relationship was established for both sexes, expressed by the model P = 0.038*LT^2.87. From the length-frequency distribution data, the asymptotic length (L∞) was estimated with the Powell-Wetherall routine, and the growth coefficient (k) with the ELEFAN I routine (Gayanilo et al. 1996). Modal progression analysis was then applied, after decomposing the length frequencies according to Bhattacharya's (1967) method, and the estimates of L∞ and k were optimized following Gulland and Holt (1959). The estimated growth parameters (L∞ = 24.2 cm and k = 0.48 yr-1) indicated moderately fast growth. The length-frequency data were fitted to the von Bertalanffy (1960) model, showing an exponential trend: rapid growth up to 2 years of age, which then slowed as the fish approached its maximum length. The natural mortality rate was high (M = 1.15 yr-1), probably caused by heavy predation.

  4. Spectral discrimination of giant reed (Arundo donax L.): A seasonal study in riparian areas

    Science.gov (United States)

    Fernandes, Maria Rosário; Aguiar, Francisca C.; Silva, João M. N.; Ferreira, Maria Teresa; Pereira, José M. C.

    2013-06-01

The giant reed (Arundo donax L.) is among the one hundred worst invasive alien species in the world, and it is responsible for biodiversity loss and failure of ecosystem functions in riparian habitats. In this work, field spectroradiometry was used to assess the spectral separability of the giant reed from the adjacent vegetation and from the common reed, a similar native species. The study was conducted at different phenological periods and also for giant reed stands regenerated after mechanical cutting (giant reed_RAC). A hierarchical procedure using the Kruskal-Wallis test followed by Classification and Regression Trees (CART) was used to select the minimum number of optimal bands that discriminate the giant reed from the adjacent vegetation. A new approach was used to identify sets of wavelengths - wavezones - that maximize the spectral separability beyond the minimum number of optimal bands. The Jeffries-Matusita and Bhattacharya distances were used to evaluate the spectral separability using the minimum optimal bands and in three simulated satellite images, namely Landsat, IKONOS and SPOT. The giant reed was spectrally separable from the adjacent vegetation at both the vegetative and the senescent periods, with the exception of the common reed during the vegetative period. The red edge region was repeatedly selected, although the visible region was also important for separating the giant reed from the herbaceous vegetation, and the mid-infrared region for discrimination from the woody vegetation. The highest separability was obtained for the giant reed_RAC stands, owing to their highly homogeneous, dense and dark-green canopy. Results are discussed by relating the phenological, morphological and structural features of the giant reed stands and the adjacent vegetation to their optical traits. Weaknesses and strengths of the spectral discrimination of giant reed are highlighted, and implications of imagery selection for mapping purposes are discussed based on the present results.
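The separability measures named above have closed forms when each class is modelled as a Gaussian. For a single band with class means mu and standard deviations sigma, the Bhattacharyya distance B and the Jeffries-Matusita distance JM = 2(1 − e^−B) can be sketched as follows; the reflectance values are invented, not the study's measurements:

```python
# Univariate-Gaussian Bhattacharyya distance and Jeffries-Matusita distance.
# JM saturates at 2.0 for perfectly separable classes.
import math

def bhattacharyya(mu1, s1, mu2, s2):
    v1, v2 = s1 ** 2, s2 ** 2
    return (0.25 * (mu1 - mu2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2 * s1 * s2)))

def jeffries_matusita(mu1, s1, mu2, s2):
    return 2.0 * (1.0 - math.exp(-bhattacharyya(mu1, s1, mu2, s2)))

# Hypothetical red-edge reflectances: giant reed vs adjacent vegetation.
jm = jeffries_matusita(0.45, 0.05, 0.30, 0.06)
print(f"JM distance = {jm:.3f}")
```

In multiband studies the same formula generalizes to mean vectors and covariance matrices; classes with JM close to 2 are considered well separable, which is how band subsets are ranked.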

  5. Crecimiento y mortalidad del pez Haemulon aurolineatum (Teleostei: Haemulidae) en el suroeste de la isla de Margarita, Venezuela

    Directory of Open Access Journals (Sweden)

    Edwis Bravo

    2009-06-01

Growth and natural mortality of the cují fish (Haemulon aurolineatum) from the southwest of Margarita Island were determined (July 2005 - June 2006) from a sample of 2 541 specimens collected from the artisanal fishery of Boca del Río. The length-weight relationship showed no significant differences between males and females in either the slopes "b" (p>0.05; ts=-1.69) or the intercepts "a" (p>0.05; ts=-1.01), so a common relationship was established for both sexes: P=0.038*LT^2.87. From the length-frequency distribution data, the asymptotic length (L∞) was estimated by applying the Powell-Wetherall routine, and the growth coefficient (k) through the ELEFAN I routine (Gayanilo et al. 1996). Modal progression analysis was then applied, after decomposing the length frequencies according to Bhattacharya's (1967) method, and the estimates of L∞ and k were optimized following the procedure of Gulland and Holt (1959). The estimated growth parameters (L∞=24.2 cm and k=0.48 yr-1) indicated moderately fast growth. The length-frequency data were fitted to the von Bertalanffy (1960) model, indicating an exponential trend: rapid growth up to 2 years of age, which then slowed until the fish reached its maximum length. The natural mortality rate was high (M=1.15 yr-1), probably due to heavy predation.
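The two fitted models above, von Bertalanffy growth and the length-weight relationship, can be sketched directly with the reported estimates (L∞ = 24.2 cm, k = 0.48 yr⁻¹, a = 0.038, b = 2.87). The abstract does not report t0, so t0 = 0 is assumed here:

```python
# von Bertalanffy growth L(t) = L_inf * (1 - exp(-k*(t - t0))) and the
# length-weight model P = a * LT^b, using the paper's parameter estimates.
# t0 = 0 is an assumption (not reported in the abstract).
import math

L_INF, K, T0 = 24.2, 0.48, 0.0
A, B = 0.038, 2.87

def length_at_age(t_years):
    return L_INF * (1.0 - math.exp(-K * (t_years - T0)))

def weight_at_length(lt_cm):
    return A * lt_cm ** B

for age in (1, 2, 5):
    lt = length_at_age(age)
    print(f"age {age} yr: LT = {lt:.1f} cm, P = {weight_at_length(lt):.1f} g")
```

The curve reproduces the pattern described in the abstract: growth is rapid up to about age 2 and then flattens toward the asymptotic length.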

  6. EVOLUÇÃO DO USO E COBERTURA DO SOLO NA BACIA HIDROGRÁFICA DO RIO DOURADOS-MS, BRASIL

    Directory of Open Access Journals (Sweden)

    Geula Graciela Gomes Gonçalves

    2011-01-01

Geographic information systems (GIS), combined with remote sensing data, are important tools for identifying and evaluating land use with a view to watershed management. The Dourados River basin, in Mato Grosso do Sul, is of great importance to the state because of its extent and use; hence the need for research aimed at adequate land-use planning, with the goal of conserving environmental resources in order to maintain water quality and quantity and make the agricultural and human use of this water source sustainable. This work aimed to map land use in the Dourados River basin at two dates, 2001 and 2008, in order to evaluate land-use change. For image classification, the supervised region-based classification approach with the Bhattacharya classifier, implemented in SIG-SPRING/INPE, was used. The results supported the following conclusions: a) the growth of agricultural areas was consistent with the expansion of sugarcane cultivation in the region; b) the reduction of pasture areas occurred as a result of sugarcane expansion; c) the vegetation classes (cerrado, forest, secondary growth) and the vegetation complex (floodplains) showed great potential for classification confusion within the studied basin; and d) the growth of eucalyptus areas was consistent with the statewide trend in Mato Grosso do Sul.

  7. Mathematical modelling

    CERN Document Server

    2016-01-01

    This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.

  8. Empirical investigation on modeling solar radiation series with ARMA–GARCH models

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Yan, Dong; Zhao, Na; Zhou, Jianzhong

    2015-01-01

    Highlights: • Apply 6 ARMA–GARCH(-M) models to model and forecast solar radiation. • The ARMA–GARCH(-M) models produce more accurate radiation forecasts than conventional methods. • Show that ARMA–GARCH-M models are more effective for forecasting solar radiation mean and volatility. • The ARMA–EGARCH-M is robust and the ARMA–sGARCH-M is very competitive. - Abstract: Simulation of radiation is one of the most important issues in solar energy utilization. Time series models are useful tools for estimating and forecasting solar radiation series and their changes. In this paper, autoregressive moving average (ARMA) models with various generalized autoregressive conditional heteroskedasticity (GARCH) processes, namely ARMA–GARCH models, are evaluated for their effectiveness on radiation series. Six different GARCH approaches, comprising three ARMA–GARCH models and the corresponding GARCH-in-mean (ARMA–GARCH-M) models, are applied to radiation data sets from two representative climate stations in China. Multiple evaluation metrics of modeling sufficiency are used to evaluate the performance of the models. The results show that the ARMA–GARCH(-M) models are effective for radiation series estimation. In both fitting and prediction of radiation series, the ARMA–GARCH(-M) models show better modeling sufficiency than traditional models; the ARMA–EGARCH-M models are robust at both sites and the ARMA–sGARCH-M models are very competitive. Comparisons of statistical diagnostics and model performance clearly show that the ARMA–GARCH-M models make the mean radiation equations more sufficient. The ARMA–GARCH(-M) models are therefore recommended as the preferred method for modeling solar radiation series
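    The conditional-variance recursion at the core of these models, sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}, is easy to sketch. A minimal GARCH(1,1) simulation (parameter values are illustrative, not fitted to any radiation series) shows how volatility clustering arises:

```python
import random

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=42):
    """Simulate eps_t = sigma_t * z_t, where
    sigma2_t = omega + alpha * eps_{t-1}**2 + beta * sigma2_{t-1}."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    eps = []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0) * sigma2 ** 0.5
        eps.append(e)
        sigma2 = omega + alpha * e * e + beta * sigma2
    return eps

returns = simulate_garch11(5000)
sample_var = sum(e * e for e in returns) / len(returns)
print(round(sample_var, 2))  # hovers near omega / (1 - alpha - beta) = 1.0
```

    The GARCH-in-mean variants add the conditional variance (or its square root) as a regressor in the mean equation, which is what the paper finds makes the mean radiation equations more sufficient.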

  9. Hydrogeological conceptual model development and numerical modelling using CONNECTFLOW, Forsmark modelling stage 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven (SF GeoLogic AB, Taeby (Sweden)); Hartley, Lee; Jackson, Peter; Roberts, David (Serco TAP (United Kingdom)); Marsic, Niko (Kemakta Konsult AB, Stockholm (Sweden))

    2008-05-15

    Three versions of a site descriptive model (SDM) have been completed for the Forsmark area. Version 0 established the state of knowledge prior to the start of the site investigation programme. Version 1.1 was essentially a training exercise and was completed during 2004. Version 1.2 was a preliminary site description and concluded the initial site investigation work (ISI) in June 2005. Three modelling stages are planned for the complete site investigation work (CSI). These are labelled stage 2.1, 2.2 and 2.3, respectively. An important component of each of these stages is to address and continuously try to resolve discipline-specific uncertainties of importance for repository engineering and safety assessment. Stage 2.1 included an updated geological model for Forsmark and aimed to provide a feedback from the modelling working group to the site investigation team to enable completion of the site investigation work. Stage 2.2 described the conceptual understanding and the numerical modelling of the bedrock hydrogeology in the Forsmark area based on data freeze 2.2. The present report describes the modelling based on data freeze 2.3, which is the final data freeze in Forsmark. In comparison, data freeze 2.3 is considerably smaller than data freeze 2.2. Therefore, stage 2.3 deals primarily with model confirmation and uncertainty analysis, e.g. verification of important hypotheses made in stage 2.2 and the role of parameter uncertainty in the numerical modelling. On the whole, the work reported here constitutes an addendum to the work reported in stage 2.2. Two changes were made to the CONNECTFLOW code in stage 2.3. These serve to: 1) improve the representation of the hydraulic properties of the regolith, and 2) improve the conditioning of transmissivity of the deformation zones against single-hole hydraulic tests. 
The changes to the modelling of the regolith were made to improve the consistency with models made with the MIKE SHE code, which involved the introduction

  10. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    Science.gov (United States)

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  11. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    International Nuclear Information System (INIS)

    1996-01-01

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues

  12. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  13. Rotating universe models

    International Nuclear Information System (INIS)

    Tozini, A.V.

    1984-01-01

    A review is made of some properties of rotating Universe models. Godel's model is identified as a generalized tilted model. Some properties of new solutions of Einstein's equations, which are rotating non-stationary Universe models, are presented and analyzed. These models have Godel's model as a particular case. Non-stationary cosmological models are found which generalize Godel's metric in a way analogous to the relation of Friedmann's model to Einstein's model. (L.C.) [pt

  14. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport

  15. Nonintersecting string model and graphical approach: equivalence with a Potts model

    International Nuclear Information System (INIS)

    Perk, J.H.H.; Wu, F.Y.

    1986-01-01

    Using a graphical method the authors establish the exact equivalence of the partition function of a q-state nonintersecting string (NIS) model on an arbitrary planar, even-valenced lattice with that of a q²-state Potts model on a relaxed lattice. The NIS model considered in this paper is one in which the vertex weights are expressible as sums of those of basic vertex types, and the resulting Potts model generally has multispin interactions. For the square and Kagome lattices this leads to the equivalence of a staggered NIS model with Potts models with anisotropic pair interactions, indicating that these NIS models have a first-order transition for q greater than 2. For the triangular lattice the NIS model turns out to be the five-vertex model of Wu and Lin and it relates to a Potts model with two- and three-site interactions. The most general model the authors discuss is an oriented NIS model which contains the six-vertex model and the NIS models of Stroganov and Schultz as special cases

  16. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
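    The coupling between model error and SOC error can be illustrated with a toy filter. The sketch below is not the paper's EKF-Ah implementation; it is a scalar Kalman filter that fuses coulomb counting with a terminal-voltage measurement under a linearised OCV curve (the OCV coefficients a and b, the resistance r0, and the noise covariances are invented for illustration):

```python
def soc_kf(soc0, currents, voltages, dt=1.0, q_cap=3600.0,
           r0=0.05, a=3.0, b=1.0, q=1e-7, r=1e-3):
    """Scalar Kalman filter for state of charge.

    State:        soc_k+1 = soc_k - i_k * dt / q_cap   (coulomb counting)
    Measurement:  v_k = a + b * soc_k - r0 * i_k       (linearised OCV, hypothetical a, b)
    """
    soc, p = soc0, 1e-2
    estimates = []
    for i_k, v_k in zip(currents, voltages):
        soc -= i_k * dt / q_cap          # predict via coulomb counting
        p += q
        innov = v_k - (a + b * soc - r0 * i_k)
        gain = p * b / (b * b * p + r)   # correct with the voltage measurement
        soc += gain * innov
        p *= (1.0 - gain * b)
        estimates.append(soc)
    return estimates

# Constant 1 A discharge of a 1 Ah (3600 C) cell, noiseless synthetic voltages
true_soc = [1.0 - (k + 1) / 3600.0 for k in range(3600)]
volts = [3.0 + s - 0.05 for s in true_soc]
est = soc_kf(1.0, [1.0] * 3600, volts)
print(round(est[-1], 3))  # ~0.0 after a full discharge
```

    With a perfectly matched model the innovations vanish; any bias injected into a, b, or r0 propagates directly into the SOC estimate, which is the model-error/estimation-error coupling the paper quantifies.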

  18. Simulation Model of Membrane Gas Separator Using Aspen Custom Modeler

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dong-keun [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of); Shin, Gahui; Yun, Jinwon; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-12-15

    Membranes are used to separate pure gas from gas mixtures. In this study, three different types of mass transport through a membrane were developed in order to investigate the gas separation capabilities of a membrane. The three different models typically used are a lumped model, a multi-cell model, and a discretization model. Despite the multi-cell model producing similar results to a discretization model, the discretization model was selected for this investigation, due to the cell number dependence of a multi-cell model. The mass transport model was then used to investigate the effects of pressure difference, flow rate, total exposed area, and permeability. The results showed that the pressure difference increased with the stage cut, but the selectivity was a trade-off for the increasing pressure difference. Additionally, even though permeability is an important parameter, the selectivity and stage cut of the membrane converged as permeability increased.

  19. Mathematical modelling

    DEFF Research Database (Denmark)

    Blomhøj, Morten

    2004-01-01

    Developing competences for setting up, analysing and criticising mathematical models are normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical...... roots for the construction of important mathematical concepts. In addition competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in this own right for mathematics teaching in general education. The paper presents a theoretical...... modelling, however, can be seen as a practice of teaching that place the relation between real life and mathematics into the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive...

  20. Simple Models for the Dynamic Modeling of Rotating Tires

    Directory of Open Access Journals (Sweden)

    J.C. Delamotte

    2008-01-01

    Full Text Available Large Finite Element (FE) models of tires are currently used to predict low frequency behavior and to obtain dynamic model coefficients used in multi-body models for riding and comfort. However, to predict higher frequency behavior, which may explain irregular wear, critical rotating speeds and noise radiation, FE models are not practical. Detailed FE models are not adequate for optimization and uncertainty predictions either, as in such applications the dynamic solution must be computed a number of times. Therefore, there is a need for simpler models that can capture the physics of the tire and be used to compute the dynamic response with a low computational cost. In this paper, the spectral (or continuous) element approach is used to derive such a model. A circular beam spectral element that takes into account the string effect is derived, and a method to simulate the response to a rotating force is implemented in the frequency domain. The behavior of a circular ring under different internal pressures is investigated using modal and frequency/wavenumber representations. Experimental results obtained with a real untreaded truck tire are presented and qualitatively compared with the simple model predictions with good agreement. No attempt is made to obtain equivalent parameters for the simple model from the real tire results. On the other hand, the simple model fails to represent the correct variation of the quotient of the natural frequency by the number of circumferential wavelengths with the mode count. Nevertheless, some important features of the real tire dynamic behavior, such as the generation of standing waves and part of the frequency/wavenumber behavior, can be investigated using the proposed simplified model.

  1. BAYESIAN MODELS FOR SPECIES DISTRIBUTION MODELLING WITH ONLY-PRESENCE RECORDS

    Directory of Open Access Journals (Sweden)

    Bartolo de Jesús Villar-Hernández

    2015-08-01

    Full Text Available One of the central issues in ecology is the study of the geographical distribution of species of flora and fauna through Species Distribution Models (SDM). Recently, scientific interest has focused on presence-only records. Two recent approaches have been proposed for this problem: a model based on the maximum likelihood method (Maxlike) and an inhomogeneous Poisson process model (IPP). In this paper we discuss two Bayesian approaches, called MaxBayes and IPPBayes, based on the Maxlike and IPP models, respectively. To illustrate these proposals, we implemented two study examples: (1) both models were implemented on a simulated dataset, and (2) we modeled the potential distribution of the genus Dalea in the Tehuacan-Cuicatlán biosphere reserve with both models, and the results were compared with those of Maxent. The results show that both models, MaxBayes and IPPBayes, are viable alternatives when species distributions are modeled with only-presence records. For the simulated dataset, MaxBayes achieved prevalence estimation even when the number of records was small. In the real dataset example, both models predict potential distributions similar to those of Maxent.

  2. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  3. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  4. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood ToolsIntroduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selectionScore and Wald tests Classical goodness of fit Model selection biasGeneral Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problemsSimulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...

  5. From spiking neuron models to linear-nonlinear models.

    Science.gov (United States)

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
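    The simplest of the spiking models discussed, the LIF neuron, can be simulated directly. A minimal Euler-integrated sketch (parameter values are illustrative; no refractory period or conductance-based synapses are modelled):

```python
import random

def simulate_lif(i_ext, dt=0.1, tau=20.0, v_rest=-70.0, v_thresh=-50.0,
                 v_reset=-70.0, t_total=1000.0, noise=0.0, seed=1):
    """Euler-integrated leaky integrate-and-fire:
    tau * dV/dt = -(V - v_rest) + I; emit a spike and reset when V >= v_thresh."""
    rng = random.Random(seed)
    v, spikes = v_rest, 0
    for _ in range(int(t_total / dt)):
        v += (-(v - v_rest) + i_ext + noise * rng.gauss(0.0, 1.0)) * dt / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes / (t_total / 1000.0)  # firing rate in Hz

print(simulate_lif(25.0))  # suprathreshold drive: ~31 Hz
print(simulate_lif(15.0))  # subthreshold, noiseless: 0.0
```

    An LN reduction of this neuron would replace the membrane dynamics with a fitted temporal filter followed by a static current-to-rate function; the paper's contribution is deriving that filter and non-linearity analytically rather than by reverse correlation.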

  6. Business Model Innovation

    OpenAIRE

    Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher

    2014-01-01

    The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...

  7. IHY Modeling Support at the Community Coordinated Modeling Center

    Science.gov (United States)

    Chulaki, A.; Hesse, Michael; Kuznetsova, Masha; MacNeice, P.; Rastaetter, L.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities during the International Heliophysical Year. In order to tailor CCMC activities to IHY needs, we will also invite community input into our IHY planning activities.

  8. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
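    A crude flavour of learning a dependence structure from data can be given with a marginal-correlation screen. This is a simplification of the conditional independence tests used in the paper (the threshold and the synthetic data are invented for illustration):

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def dependence_graph(samples, names, thresh=0.2):
    """Connect variable pairs whose |sample correlation| exceeds thresh."""
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = pearson([s[i] for s in samples], [s[j] for s in samples])
            if abs(r) > thresh:
                edges.append((names[i], names[j]))
    return edges

rng = random.Random(0)
data = []
for _ in range(2000):
    a = rng.gauss(0.0, 1.0)
    b = 0.8 * a + 0.2 * rng.gauss(0.0, 1.0)  # B depends on A
    c = rng.gauss(0.0, 1.0)                  # C is independent of both
    data.append((a, b, c))
print(dependence_graph(data, ["A", "B", "C"]))  # only the A-B edge survives
```

    The recovered edge set plays the role of the dependence structure along which the joint PDF is factorized into conditional distributions; the paper's approach uses proper conditional independence tests and Bayesian network structure learning rather than this pairwise screen.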

  9. Petroleum Market Model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation methodologies; Appendix G, Matrix Generator documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  10. Petroleum Market Model of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation methodologies; Appendix G, Matrix Generator documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule

  11. Modeling influenza-like illnesses through composite compartmental models

    Science.gov (United States)

    Levy, Nir; Michael, Iv; Yom-Tov, Elad

    2018-03-01

    Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered (SIR) model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the population, which we show match independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
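    The single-pathogen building block that the method superposes is the classic SIR system. A minimal Euler-integrated sketch (parameter values are illustrative, giving R0 = beta/gamma = 3):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR equations (s, i, r are population fractions)."""
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate_sir(beta=0.3, gamma=0.1, i0=1e-3, days=200, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = simulate_sir()
print(round(peak, 2), round(r, 2))  # R0 = 3: peak prevalence ~0.3, final size ~0.94
```

    The dictionary-based decomposition in the paper treats an observed ILI curve as a sparse sum of several such infection curves with different (beta, gamma, i0), one per circulating pathogen.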

  12. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  13. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Full Text Available Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potentials. The primary objective of this study is to model the cyclists’ red-light running frequency within the framework of Bayesian statistics. Data was collected at twenty-five approaches at seventeen signalized intersections. Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that both models fit the observed cyclists’ red-light running frequency well. Furthermore, the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists’ red-light running is significantly influenced by bicycle flow, conflicting traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation result demonstrated the reliability of the PLN model. The research results can help transportation professionals predict the expected amount of cyclists’ red-light running and develop effective guidelines or policies to reduce the red-light running frequency of bicycles at signalized intersections.
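    As a rough illustration of the Poisson-gamma structure behind such count models (illustrative parameters, my own sketch rather than the study's fitted model): a gamma-distributed multiplicative effect with mean 1 and shape phi over-disperses a Poisson count, giving variance mu + mu^2/phi instead of mu.

    ```python
    import math
    import random

    random.seed(0)

    def poisson(lam):
        """Knuth's Poisson sampler, adequate for small rates."""
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= L:
                return k
            k += 1

    def pg_sample(mu, phi):
        """One draw from a Poisson-gamma (negative binomial) count model."""
        eps = random.gammavariate(phi, 1.0 / phi)  # mean 1, variance 1/phi
        return poisson(mu * eps)

    mu, phi, n = 4.0, 2.0, 20000
    draws = [pg_sample(mu, phi) for _ in range(n)]
    mean = sum(draws) / n
    var = sum((d - mean) ** 2 for d in draws) / n
    # Theory: mean ~ mu = 4, variance ~ mu + mu**2 / phi = 12
    ```

    The PLN variant replaces the gamma effect with a lognormal one; the Bayesian fitting of either model to intersection data is beyond this sketch.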

  14. An improved interfacial bonding model for material interface modeling

    Science.gov (United States)

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. The path-dependence study of the work of separation indicates that the transformation of separation work is smooth in the normal and tangential directions, and that the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is acceptably small in the linear elastic region. Finally, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343

  15. Modelling freight transport

    NARCIS (Netherlands)

    Tavasszy, L.A.; Jong, G. de

    2014-01-01

    Freight Transport Modelling is a unique new reference book that provides insight into the state-of-the-art of freight modelling. Focusing on models used to support public transport policy analysis, Freight Transport Modelling systematically introduces the latest freight transport modelling

  16. Adhesive contact: from atomistic model to continuum model

    International Nuclear Information System (INIS)

    Fan Kang-Qi; Jia Jian-Yuan; Zhu Ying-Min; Zhang Xiu-Yan

    2011-01-01

    Two types of Lennard-Jones potential are widely used in modeling adhesive contacts. However, the relationships between the parameters of the two types of Lennard-Jones potential are not well defined. This paper employs a self-consistent method to derive the Lennard-Jones surface force law from the interatomic Lennard-Jones potential with emphasis on the relationships between the parameters. The effect of using correct parameters in the adhesion models is demonstrated in single sphere-flat contact via continuum models and an atomistic model. Furthermore, the adhesion hysteresis behaviour is investigated, and the S-shaped force-distance relation is revealed by the atomistic model. It shows that the adhesion hysteresis loop is generated by the jump-to-contact and jump-off-contact, which are illustrated by the S-shaped force-distance curve. (atomic and molecular physics)
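    For reference, the standard 12-6 interatomic Lennard-Jones potential (not the paper's derived surface force law) has its minimum at r = 2^(1/6)·σ with depth −ε; this is exactly the kind of parameter relationship a self-consistent derivation must preserve when translating between the two common parameterizations. A small sketch:

    ```python
    def lj_potential(r, epsilon, sigma):
        """12-6 Lennard-Jones pair potential V(r) = 4*eps*((s/r)**12 - (s/r)**6)."""
        x = (sigma / r) ** 6
        return 4.0 * epsilon * (x * x - x)

    def lj_force(r, epsilon, sigma, h=1e-7):
        """Force F = -dV/dr, estimated by central difference."""
        return -(lj_potential(r + h, epsilon, sigma)
                 - lj_potential(r - h, epsilon, sigma)) / (2 * h)

    eps, sig = 1.0, 1.0
    r_min = 2 ** (1.0 / 6.0) * sig               # equilibrium separation
    well_depth = lj_potential(r_min, eps, sig)   # equals -eps at the minimum
    ```

    Inside r_min the force is repulsive (positive, pushing outward); beyond it the force is attractive, which is the source of the jump-to-contact and jump-off-contact instabilities mentioned above.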

  17. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations tells the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, a recalculation of the model validation metric of model output is taken with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in case of both single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration process are applied at single site and multiple sites. • Validation and calibration process show a superiority than existing methods
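    The idea that conditional expectations act as first-order component functions can be sketched on a toy model with independent uniform inputs (my own example, not the paper's): E[Y | x_i] traces out the component g_i(x_i) shifted by the overall mean E[Y].

    ```python
    import random

    random.seed(1)

    def model(x1, x2):
        """Toy computational model with independent inputs on [0, 1]."""
        return x1 + 2.0 * x2 ** 2

    def conditional_expectation(fixed_value, index, n=20000):
        """First-order HDMR component information: E[Y | x_index = fixed_value],
        estimated by Monte Carlo over the remaining input."""
        total = 0.0
        for _ in range(n):
            other = random.random()
            args = (fixed_value, other) if index == 0 else (other, fixed_value)
            total += model(*args)
        return total / n

    # Analytically: E[Y | x1 = 0.9] = 0.9 + 2*E[x2**2] = 0.9 + 2/3
    e_y_given_x1 = conditional_expectation(0.9, 0)
    ```

    Comparing such curves from the computational model and from observations (via the area metric) is the validation step the abstract describes; calibration then adjusts parameters to shrink that area.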

  18. Finsler Geometry Modeling of an Orientation-Asymmetric Surface Model for Membranes

    Science.gov (United States)

    Proutorov, Evgenii; Koibuchi, Hiroshi

    2017-12-01

    In this paper, a triangulated surface model is studied in the context of Finsler geometry (FG) modeling. This FG model is an extended version of a recently reported model for two-component membranes, and it is asymmetric under surface inversion. We show that the definition of the model is independent of how the Finsler length of a bond is defined. This leads us to understand that the canonical (or Euclidean) surface model is obtained from the FG model in such a way that it is uniquely determined as a trivial model from the viewpoint of well-definedness.

  19. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D.W. Wu; A.J. Smith

    2004-11-08

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  20. Biosphere Model Report

    International Nuclear Information System (INIS)

    D.W. Wu; A.J. Smith

    2004-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  1. Model documentation: Natural Gas Transmission and Distribution Model of the National Energy Modeling System; Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-02-24

    The Natural Gas Transmission and Distribution Model (NGTDM) is a component of the National Energy Modeling System (NEMS) used to represent the domestic natural gas transmission and distribution system. NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the Energy Information Administration (EIA) and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. This report documents the archived version of NGTDM that was used to produce the natural gas forecasts used in support of the Annual Energy Outlook 1994, DOE/EIA-0383(94). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic design, provides detail on the methodology employed, and describes the model inputs, outputs, and key assumptions. It is intended to fulfill the legal obligation of the EIA to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). This report represents Volume 1 of a two-volume set. (Volume 2 will report on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.) Subsequent chapters of this report provide: (1) an overview of the NGTDM (Chapter 2); (2) a description of the interface between the National Energy Modeling System (NEMS) and the NGTDM (Chapter 3); (3) an overview of the solution methodology of the NGTDM (Chapter 4); (4) the solution methodology for the Annual Flow Module (Chapter 5); (5) the solution methodology for the Distributor Tariff Module (Chapter 6); (6) the solution methodology for the Capacity Expansion Module (Chapter 7); (7) the solution methodology for the Pipeline Tariff Module (Chapter 8); and (8) a description of model assumptions, inputs, and outputs (Chapter 9).

  2. A multi-model assessment of terrestrial biosphere model data needs

    Science.gov (United States)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial

  3. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  4. A comprehensive model for piezoceramic actuators: modelling, validation and application

    International Nuclear Information System (INIS)

    Quant, Mario; Elizalde, Hugo; Flores, Abiud; Ramírez, Ricardo; Orta, Pedro; Song, Gangbing

    2009-01-01

    This paper presents a comprehensive model for piezoceramic actuators (PAs), which accounts for hysteresis, non-linear electric field and dynamic effects. The hysteresis model is based on the widely used general Maxwell slip model, while an enhanced electro-mechanical non-linear model replaces the linear constitutive equations commonly used. Further on, a linear second order model compensates the frequency response of the actuator. Each individual model is fully characterized from experimental data yielded by a specific PA, then incorporated into a comprehensive 'direct' model able to determine the output strain based on the applied input voltage, fully compensating the aforementioned effects, where the term 'direct' represents an electrical-to-mechanical operating path. The 'direct' model was implemented in a Matlab/Simulink environment and successfully validated via experimental results, exhibiting higher accuracy and simplicity than many published models. This simplicity would allow a straightforward inclusion of other behaviour such as creep, ageing, material non-linearity, etc, if such parameters are important for a particular application. Based on the same formulation, two other models are also presented: the first is an 'alternate' model intended to operate within a force-controlled scheme (instead of a displacement/position control), thus able to capture the complex mechanical interactions occurring between a PA and its host structure. The second development is an 'inverse' model, able to operate within an open-loop control scheme, that is, yielding a 'linearized' PA behaviour. The performance of the developed models is demonstrated via a numerical sample case simulated in Matlab/Simulink, consisting of a PA coupled to a simple mechanical system, aimed at shifting the natural frequency of the latter
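    The general Maxwell slip hysteresis operator mentioned above can be sketched as a bank of spring-slider elements in parallel: sweeping the input up and then back down returns different outputs at the same input value, tracing a hysteresis loop. The element stiffnesses and breakaway forces below are arbitrary illustrations, not identified actuator parameters.

    ```python
    class MaxwellSlip:
        """Generalized Maxwell slip hysteresis operator: spring-slider
        elements in parallel, each a (stiffness, breakaway force) pair."""

        def __init__(self, elements):
            # state per element: (stiffness k, breakaway force f, slider position z)
            self.elements = [(k, f, 0.0) for k, f in elements]

        def update(self, x):
            """Advance the operator to input x and return the total output force."""
            out = 0.0
            new = []
            for k, f, z in self.elements:
                force = k * (x - z)
                if abs(force) > f:                 # slider breaks away: element saturates
                    force = f if force > 0 else -f
                    z = x - force / k              # slider tracks the input
                out += force
                new.append((k, f, z))
            self.elements = new
            return out

    m = MaxwellSlip([(1.0, 0.2), (0.5, 0.4), (0.25, 0.6)])
    up = [m.update(x / 100.0) for x in range(0, 101)]          # sweep input 0 -> 1
    down = [m.update(x / 100.0) for x in range(100, -1, -1)]   # then 1 -> 0
    hysteresis_gap = up[50] - down[50]  # same input (0.5), different outputs
    ```

    In the paper this operator is one stage of a composite model, combined with a non-linear electro-mechanical map and a second-order linear dynamic compensator.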

  5. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles with sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to the one of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.
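    Bagging, the data-sampling baseline this method is compared against, can be sketched in a few lines: resample the training data with replacement, fit one simple model per resample, and average their predictions. The toy least-squares line fitter below is my own stand-in for a process-based learner.

    ```python
    import random

    random.seed(3)

    # Synthetic data from y = 2x plus noise.
    data = [(x / 10.0, 2.0 * (x / 10.0) + random.gauss(0, 0.3)) for x in range(50)]

    def fit_slope(sample):
        """Least-squares slope of a through-the-origin line y = w*x."""
        sxx = sum(x * x for x, _ in sample)
        sxy = sum(x * y for x, y in sample)
        return sxy / sxx

    def bagged_slopes(data, n_models=25):
        """Fit one model per bootstrap resample of the data."""
        slopes = []
        for _ in range(n_models):
            boot = [random.choice(data) for _ in data]  # sample with replacement
            slopes.append(fit_slope(boot))
        return slopes

    ensemble = bagged_slopes(data)
    prediction_at_2 = sum(w * 2.0 for w in ensemble) / len(ensemble)
    ```

    The paper's contribution is to get a comparably diverse ensemble by sampling the library of domain knowledge instead of the data, so each base model is learned only once per knowledge sample.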

  6. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    Science.gov (United States)

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  7. BioModels Database: a repository of mathematical models of biological processes.

    Science.gov (United States)

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to ensure that they correspond to the original publication, and are manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML, and available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.

  8. Modeling Distillation Column Using ARX Model Structure and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Pirmoradi

    2012-04-01

    Full Text Available Distillation is a complex and highly nonlinear industrial process. In general it is not always possible to obtain accurate first-principles models for high-purity distillation columns. On the other hand, the development of first-principles models is usually time consuming and expensive. To overcome these problems, empirical models such as neural networks can be used. One major drawback of empirical models is that the prediction is valid only inside the data domain that is sufficiently covered by measurement data. Modeling distillation columns by means of neural networks has been reported in the literature using recursive networks. Recursive networks are suitable for modeling purposes, but such models suffer from high complexity and high computational cost. The objective of this paper is to propose a simple and reliable model for a distillation column. The proposed model uses feed-forward neural networks, which results in a simple model with fewer parameters and faster training time. Simulation results demonstrate that predictions of the proposed model in all regions are close to the outputs of the dynamic model and the error is negligible. This implies that the model is reliable in all regions.
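    An ARX model structure of the kind named in the title can be identified by ordinary least squares. The sketch below fits a first-order ARX model y[t] = a·y[t−1] + b·u[t−1] to simulated data by solving the 2×2 normal equations directly (a toy system with assumed parameters, not a distillation column).

    ```python
    import random

    random.seed(2)

    # Generate data from a known first-order ARX system:
    #   y[t] = a*y[t-1] + b*u[t-1] + noise
    a_true, b_true = 0.7, 1.5
    u = [random.uniform(-1, 1) for _ in range(500)]
    y = [0.0]
    for t in range(1, 500):
        y.append(a_true * y[t - 1] + b_true * u[t - 1] + random.gauss(0, 0.01))

    # Least-squares fit of (a, b) from regressors [y[t-1], u[t-1]]:
    # solve the 2x2 normal equations (X'X) theta = X'Y by hand.
    syy = sum(y[t - 1] * y[t - 1] for t in range(1, 500))
    suu = sum(u[t - 1] * u[t - 1] for t in range(1, 500))
    syu = sum(y[t - 1] * u[t - 1] for t in range(1, 500))
    ry = sum(y[t - 1] * y[t] for t in range(1, 500))
    ru = sum(u[t - 1] * y[t] for t in range(1, 500))
    det = syy * suu - syu * syu
    a_hat = (ry * suu - ru * syu) / det
    b_hat = (ru * syy - ry * syu) / det
    ```

    The paper's feed-forward neural network plays the same role as the linear map above but with a nonlinear function of the lagged regressors, which is what captures the column's nonlinearity.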

  9. Constitutive Models

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina

    2011-01-01

    covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses...... the degrees of freedom, dependent and independent variables and solution strategy. Vapour-liquid and solid-liquid equilibrium is covered, and applications to droplet evaporation and kinetic models are given....

  10. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  11. Algebraic formulation of collective models. I. The mass quadrupole collective model

    International Nuclear Information System (INIS)

    Rosensteel, G.; Rowe, D.J.

    1979-01-01

    This paper is the first in a series of three which together present a microscopic formulation of the Bohr--Mottelson (BM) collective model of the nucleus. In this article the mass quadrupole collective (MQC) model is defined and shown to be a generalization of the BM model. The MQC model eliminates the small oscillation assumption of BM and also yields the rotational and CM (3) submodels by holonomic constraints on the MQC configuration space. In addition, the MQC model is demonstrated to be an algebraic model, so that the state space of the MQC model carries an irrep of a Lie algebra of microscopic observables, the MQC algebra. An infinite class of new collective models is then given by the various inequivalent irreps of this algebra. A microscopic embedding of the BM model is achieved by decomposing the representation of the MQC algebra on many-particle state space into its irreducible components. In the second paper this decomposition is studied in detail. The third paper presents the symplectic model, which provides the realization of the collective model in the harmonic oscillator shell model

  12. Modeling of immission from power plants using stream-diffusion model

    International Nuclear Information System (INIS)

    Kanevce, Lj.; Kanevce, G.; Markoski, A.

    1996-01-01

    An analysis of simple empirical and integral immission models, compared with complex three-dimensional differential models, is given. Complex differential models need enormous computing power, which makes them impractical for engineering calculations. In this paper, immission modeling using a stream-diffusion approach is presented. The dispersion process is divided into two parts. The first, the stream part, lies near the pollutant source and is represented as a deflected turbulent jet in a wind field. This part ends when the velocity of the stream (jet) becomes equal to the wind speed. The boundary conditions at the end of the first part are the initial conditions for the second, the diffusion part, which is modelled with a three-dimensional diffusion equation. The temperature gradient, wind speed profile, and diffusion coefficient in this model need not be constants; they can vary with height. The presented model is much simpler than complete meteorological differential models, which calculate whole fields of meteorological parameters, yet it gives more valuable results for the dispersion of pollutants than the widely used integral and empirical models.
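    The diffusion part described above amounts to solving a diffusion equation with height-dependent coefficients. A minimal one-dimensional explicit finite-difference sketch (illustrative values chosen to satisfy the explicit-scheme stability limit; the paper's model is three-dimensional):

    ```python
    # 1-D diffusion of a pollutant puff with a height-dependent diffusion
    # coefficient K(z), advanced with an explicit finite-difference scheme.
    n, dz, dt = 101, 10.0, 1.0
    K = [1.0 + 0.005 * i * dz for i in range(n)]  # diffusivity grows with height
    c = [0.0] * n
    c[50] = 1.0                                    # initial concentration puff
    for _ in range(200):
        new = c[:]
        for i in range(1, n - 1):
            # stability requires dt * K / dz**2 <= 0.5, satisfied here
            new[i] = c[i] + dt * K[i] * (c[i + 1] - 2 * c[i] + c[i - 1]) / dz**2
        c = new
    total = sum(c)  # mass is (numerically) conserved away from the boundaries
    ```

    The real stream-diffusion model would first run the jet (stream) stage to set the initial puff location and spread, then hand those values to a 3-D version of this diffusion stage.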

  13. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  14. Mathematical models for sleep-wake dynamics: comparison of the two-process model and a mutual inhibition neuronal model.

    Directory of Open Access Journals (Sweden)

    Anne C Skeldon

    Full Text Available Sleep is essential for the maintenance of the brain and the body, yet many features of sleep are poorly understood and mathematical models are an important tool for probing proposed biological mechanisms. The most well-known mathematical model of sleep regulation, the two-process model, models the sleep-wake cycle by two oscillators: a circadian oscillator and a homeostatic oscillator. An alternative, more recent, model considers the mutual inhibition of sleep promoting neurons and the ascending arousal system regulated by homeostatic and circadian processes. Here we show there are fundamental similarities between these two models. The implications are illustrated with two important sleep-wake phenomena. Firstly, we show that in the two-process model, transitions between different numbers of daily sleep episodes can be classified as grazing bifurcations. This provides the theoretical underpinning for numerical results showing that the sleep patterns of many mammals can be explained by the mutual inhibition model. Secondly, we show that when sleep deprivation disrupts the sleep-wake cycle, ostensibly different measures of sleepiness in the two models are closely related. The demonstration of the mathematical similarities of the two models is valuable because not only does it allow some features of the two-process model to be interpreted physiologically but it also means that knowledge gained from study of the two-process model can be used to inform understanding of the behaviour of the mutual inhibition model. This is important because the mutual inhibition model and its extensions are increasingly being used as a tool to understand a diverse range of sleep-wake phenomena such as the design of optimal shift-patterns, yet the values it uses for parameters associated with the circadian and homeostatic processes are very different from those that have been experimentally measured in the context of the two-process model.
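    A minimal numerical sketch of the two-process model (parameter values are illustrative, loosely in the spirit of the classic formulation, not experimentally fitted): homeostatic pressure S builds during wake, decays during sleep, and the wake/sleep state switches when S crosses circadian-modulated thresholds.

    ```python
    import math

    dt = 0.01                   # time step in hours
    chi_w, chi_s = 18.0, 4.2    # wake build-up / sleep decay time constants (h)
    mesor_hi, mesor_lo, amp = 0.6, 0.17, 0.1  # threshold levels and amplitude

    def circadian(t):
        """Unit-amplitude circadian oscillation with a 24 h period."""
        return math.sin(2 * math.pi * t / 24.0)

    t, S, awake = 0.0, 0.5, True
    sleep_onsets = []
    while t < 24 * 10:          # simulate ten days
        if awake:
            S += dt * (1.0 - S) / chi_w            # pressure builds toward 1
            if S > mesor_hi + amp * circadian(t):  # upper threshold: fall asleep
                awake = False
                sleep_onsets.append(t % 24.0)      # record clock time of onset
        else:
            S += dt * (-S / chi_s)                 # pressure decays toward 0
            if S < mesor_lo + amp * circadian(t):  # lower threshold: wake up
                awake = True
        t += dt

    daily_sleep_episodes = len(sleep_onsets) / 10.0
    ```

    Changing the gap between the thresholds, or the decay constants, changes how many sleep episodes occur per day, which is the grazing-bifurcation behaviour the abstract refers to; the mutual inhibition model reproduces the same transitions with neuronal state variables.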

  15. Crop rotation modelling-A European model intercomparison

    Czech Academy of Sciences Publication Activity Database

    Kollas, C.; Kersebaum, K. C.; Nendel, C.; Manevski, K.; Müller, C.; Palosuo, T.; Armas-Herrera, C.; Beaudoin, N.; Bindi, M.; Charefeddine, M.; Conradt, T.; Constantin, J.; Eitzinger, J.; Ewert, F.; Ferrise, R.; Gaiser, T.; de Cortazar-Atauri, I. G.; Giglio, L.; Hlavinka, Petr; Hoffman, H.; Hofmann, M.; Launay, M.; Manderscheid, R.; Mary, B.; Mirschel, W.; Moriondo, M.; Olesen, J. E.; Öztürk, I.; Pacholski, A.; Ripoche-Wachter, D.; Roggero, P. P.; Roncossek, S.; Rötter, R. P.; Ruget, F.; Sharif, B.; Trnka, Miroslav; Ventrella, D.; Waha, K.; Wegehenkel, M.; Weigel, H-J.; Wu, L.

    2015-01-01

    Roč. 70, oct (2015), s. 98-111 ISSN 1161-0301 Institutional support: RVO:67179843 Keywords : model ensemble * crop simulation models * catch crop * intermediate crop * treatment * Multi-year Subject RIV: GC - Agronomy Impact factor: 3.186, year: 2015

  16. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ghezzehej, T.

    2004-01-01

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency

  17. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  18. Modeling patterns in data using linear and related models

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
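The one-covariate case emphasized in the report reduces to the closed-form least-squares estimators. A minimal sketch with illustrative data (a hypothetical yearly event count, not data from the report):

```python
def fit_trend(t, y):
    """Ordinary least squares fit of y = a + b*t (one covariate).

    Uses the closed-form estimators b = Sxy / Sxx, a = ybar - b * tbar.
    """
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    b = sxy / sxx
    a = ybar - b * tbar
    return a, b

# Illustrative time-trend data: event counts over five years
years = [0, 1, 2, 3, 4]
counts = [10, 9, 7, 6, 3]
a, b = fit_trend(years, counts)   # b < 0 indicates a decreasing trend
```

A negative fitted slope, judged against its standard error, is the basic evidence for a time trend in such analyses.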

  19. Pengaruh Struktur Kepemilikan Saham Leverage Faktor Intern Dan Faktor Ekstern Terhadap Nilai Perusahaan (Studi empirik pada perusahaan manufaktur dan non manufaktur di Bursa Efek Jakarta

    Directory of Open Access Journals (Sweden)

    Sujoko

    2007-01-01

    … capital markets that are already advanced, the agency problem on the Jakarta Stock Exchange is the divergence of interests between minority and majority owners. The hypotheses of this study are: (1) ownership structure, external factors, and internal factors have a significant effect on leverage; (2) ownership structure, external factors, internal factors, and leverage have a significant effect on firm value. This study tests agency theory (Jensen and Meckling, 1976), the Pecking Order Theory (Myers, 1984), the trade-off model, and Signaling Theory (Bhattacharya, 1979). The population of this study comprises the public companies listed on the Jakarta Stock Exchange; a sample of 134 companies was drawn using purposive sampling. The data were analysed using Structural Equation Modelling. The results show that ownership structure, internal factors, and external factors have a significant effect on leverage, and that ownership structure, external factors, internal factors, and leverage have a significant effect on firm value. These results do not support agency theory (Jensen and Meckling, 1976) but do support the Pecking Order Theory (Myers, 1984), the trade-off model, and Signaling Theory (Bhattacharya, 1979). Keywords: Tobin's Q, leverage, ownership structure.

  20. Transport properties site descriptive model. Guidelines for evaluation and modelling

    International Nuclear Information System (INIS)

    Berglund, Sten; Selroos, Jan-Olof

    2004-04-01

    This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. 
The site descriptive modelling of

  1. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    Science.gov (United States)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally

  2. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  3. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  4. PEMODELAN DAERAH TANGKAPAN AIR WADUK KELILING DENGAN MODEL SWAT (Keliling Reservoir Catchment Area Modeling Using SWAT Model

    Directory of Open Access Journals (Sweden)

    Teuku Ferijal

    2015-05-01

    Full Text Available This study aimed to model the watershed area of Keliling Reservoir using the SWAT model. The reservoir is located in Aceh Besar District, Province of Aceh. The model was set up using a 90 m x 90 m digital elevation model, land use data extracted from remote sensing data, and soil characteristics obtained from laboratory analysis of soil samples. The model was calibrated using observed daily reservoir volume, and model performance was analyzed using the RMSE-observations standard deviation ratio (RSR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS). The model delineated the study area into 3,448 ha comprising 13 subwatersheds and 76 land units (HRUs). The watershed is mostly covered by forest (53%) and grassland (31%). The analysis revealed the 10 most sensitive parameters, i.e. GW_DELAY, CN2, REVAPMN, ALPHA_BF, SOL_AWC, GW_REVAP, GWQMN, CH_K2 and ESCO. Model performance was categorized as very good for monthly reservoir volume, with NSE 0.95, RSR 0.23, and PBIAS 2.97. Performance decreased when the model was used to analyze daily reservoir inflow, with NSE 0.55, RSR 0.67, and PBIAS 3.46. Keywords: Keliling Reservoir, SWAT, Watershed
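The three performance scores used in this study have standard definitions; a minimal sketch with the usual formulas (the data below are illustrative, not from the study):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / (sum of squares about the observed mean)."""
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - m) ** 2 for o in obs)
    return 1 - sse / sst

def rsr(obs, sim):
    """RMSE-observations standard deviation ratio; equals sqrt(1 - NSE)."""
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - m) ** 2 for o in obs)
    return math.sqrt(sse) / math.sqrt(sst)

def pbias(obs, sim):
    """Percent bias; positive values indicate model underestimation."""
    return 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

NSE near 1, RSR near 0, and PBIAS near 0 correspond to the "very good" performance class reported for the monthly volumes.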

  5. CP^(N-1) model: a toy model for QCD

    International Nuclear Information System (INIS)

    Cant, R.J.; Davis, A.C.

    1979-01-01

    The authors examine the CP^(N-1) models and discuss their relevance as toy models for QCD_4. Specifically, they study the role of instantons, theta vacua, and confinement in the 1/N expansion. The results, and comparisons with other two-dimensional models, suggest that most of the interesting features of these models are peculiarities of two-dimensional space-time and cannot be expected to reappear in QCD_4. (Auth.)

  6. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. To propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that have had smaller root mean squared error than did those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
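The idea of averaging over models chosen on bootstrap resamples can be sketched in a simplified form. This toy version resamples (x, y) pairs i.i.d. (ignoring the time-series structure a real implementation must respect), chooses between a no-effect and a linear-effect model by AIC on each resample, and averages the resulting effect estimates. It is an assumption-laden illustration of the averaging idea, not the authors' BOOT or double BOOT procedure:

```python
import math, random

def ols_slope(x, y):
    """Closed-form OLS intercept and slope for y = a + b*x."""
    xb = sum(x) / len(x)
    yb = sum(y) / len(y)
    sxx = sum((xi - xb) ** 2 for xi in x)
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sxx
    return yb - b * xb, b

def aic(sse, n, k):
    """Gaussian AIC up to constants: n*log(SSE/n) + 2*(number of parameters)."""
    if sse <= 0:
        return -math.inf   # perfect fit
    return n * math.log(sse / n) + 2 * k

def boot_average_effect(x, y, reps=200, seed=1):
    """Average the AIC-selected effect estimate over bootstrap resamples.

    Slope is recorded as 0 whenever the no-effect model wins the
    selection on that resample, so model uncertainty is reflected in
    the average.
    """
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [x[i] for i in idx]
        by = [y[i] for i in idx]
        if len(set(bx)) < 2:           # degenerate resample; skip
            continue
        yb = sum(by) / n
        sse0 = sum((yi - yb) ** 2 for yi in by)          # intercept only
        a, b = ols_slope(bx, by)
        sse1 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(bx, by))
        effects.append(b if aic(sse1, n, 2) < aic(sse0, n, 1) else 0.0)
    return sum(effects) / len(effects)
```

Repeating the whole procedure on resamples of these bootstrap averages is the flavour of the "double BOOT" extension.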

  7. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  8. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    Science.gov (United States)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  9. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM, including self-organization, self-adaptive and self-learning, the dynamic information contained in the historical data can be identified and extracted by computer automatically. Thereby, a new approach is proposed to estimate model errors based on EM in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of the statistics and dynamics to certain extent.

  10. SEP modeling based on global heliospheric models at the CCMC

    Science.gov (United States)

    Mays, M. L.; Luhmann, J. G.; Odstrcil, D.; Bain, H. M.; Schwadron, N.; Gorby, M.; Li, Y.; Lee, K.; Zeitlin, C.; Jian, L. K.; Lee, C. O.; Mewaldt, R. A.; Galvin, A. B.

    2017-12-01

    Heliospheric models provide contextual information of conditions in the heliosphere, including the background solar wind conditions and shock structures, and are used as input to SEP models, providing an essential tool for understanding SEP properties. The global 3D MHD WSA-ENLIL+Cone model provides a time-dependent background heliospheric description, into which a spherical shaped hydrodynamic CME can be inserted. ENLIL simulates solar wind parameters and additionally one can extract the magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. An accurate representation of the background solar wind is necessary for simulating transients. ENLIL simulations also drive SEP models such as the Solar Energetic Particle Model (SEPMOD) (Luhmann et al. 2007, 2010) and the Energetic Particle Radiation Environment Module (EPREM) (Schwadron et al. 2010). The Community Coordinated Modeling Center (CCMC) is in the process of making these SEP models available to the community and offering a system to run SEP models driven by a variety of heliospheric models available at CCMC. SEPMOD injects protons onto a sequence of observer field lines at intensities dependent on the connected shock source strength which are then integrated at the observer to approximate the proton flux. EPREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. The coupled ENLIL and SEP models allow us to derive the longitudinal distribution of SEP profiles of different types of events throughout the heliosphere. In this presentation we demonstrate several case studies of SEP event modeling at different observers based on WSA-ENLIL+Cone simulations.

  11. Map algebra and model algebra for integrated model building

    NARCIS (Netherlands)

    Schmitz, O.; Karssenberg, D.J.; Jong, K. de; Kok, J.-L. de; Jong, S.M. de

    2013-01-01

    Computer models are important tools for the assessment of environmental systems. A seamless workflow of construction and coupling of model components is essential for environmental scientists. However, currently available software packages are often tailored either to the construction of model

  12. CCF model comparison

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    2004-04-01

    The report describes a simple comparison of two CCF models, the ECLM and the Beta-model. The objective of the comparison is to identify differences in the results of the models by applying them to some simple test data cases. The comparison focuses mainly on theoretical aspects of the above-mentioned CCF models. The properties of the model parameter estimates in the data cases are also discussed. The practical aspects of using and estimating CCF models in a real PSA context (e.g. data interpretation, properties of computer tools, model documentation) are not discussed in the report, nor are the qualitative CCF analyses needed in using the models. (au)
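The Beta-model mentioned above has a compact closed form: a fraction beta of each component's failure probability is attributed to common-cause failures that take down the whole redundancy group at once, the remainder to independent failures. A hedged sketch under a rare-event approximation, with illustrative numbers:

```python
def beta_model_system_unavailability(q_total, beta, m=2):
    """Beta-factor CCF model for an m-redundant (all-must-fail) system.

    Each component fails with total probability q_total; a fraction
    beta of failures are common-cause and fail all m components
    simultaneously, the rest are independent:

        P(all m fail) ~= ((1 - beta) * q_total) ** m + beta * q_total

    Rare-event approximation; illustrative only.
    """
    q_independent = (1 - beta) * q_total
    return q_independent ** m + beta * q_total

# With beta = 0.1, the common-cause term dominates redundant-system failure:
p_with_ccf = beta_model_system_unavailability(1e-3, 0.1)
p_no_ccf = beta_model_system_unavailability(1e-3, 0.0)
```

The comparison shows why the report's test cases matter: even a modest beta makes the CCF contribution, linear in q_total, swamp the quadratic independent-failure term.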

  13. Wake modelling combining mesoscale and microscale models

    DEFF Research Database (Denmark)

    Badger, Jake; Volker, Patrick; Prospathospoulos, J.

    2013-01-01

    In this paper the basis for introducing thrust information from microscale wake models into mesoscale model wake parameterizations will be described. A classification system for the different types of mesoscale wake parameterizations is suggested and outlined. Four different mesoscale wake paramet...

  14. Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)

    Science.gov (United States)

    Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David

    2018-01-01

    Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life.The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet whether from emission, scattering, or in transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation.The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or “checking in” new model simulations that are in accordance with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of

  15. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    Science.gov (United States)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales that link regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
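The "algorithms based on light and temperature" in such isoprene models are typically of the Guenther et al. (1993) form: an ecosystem emission factor scaled by dimensionless light and temperature activity factors. A sketch using the commonly cited default coefficients, treated here as assumptions:

```python
import math

R = 8.314   # universal gas constant, J mol-1 K-1

def gamma_light(ppfd, alpha=0.0027, c_l1=1.066):
    """Light activity factor; ppfd in umol m-2 s-1, saturating near 1 at high light."""
    return alpha * c_l1 * ppfd / math.sqrt(1 + alpha ** 2 * ppfd ** 2)

def gamma_temp(t, t_s=303.0, t_m=314.0, c_t1=95000.0, c_t2=230000.0):
    """Temperature activity factor; t in kelvin, rising to a peak near t_m."""
    num = math.exp(c_t1 * (t - t_s) / (R * t_s * t))
    den = 1 + math.exp(c_t2 * (t - t_m) / (R * t_s * t))
    return num / den

def isoprene_flux(epsilon, foliar_density, ppfd, t):
    """Emission = emission factor * foliar density * light factor * temperature factor."""
    return epsilon * foliar_density * gamma_light(ppfd) * gamma_temp(t)
```

The sensitivity exploration described in the abstract amounts to perturbing `epsilon`, the biomass (foliar density), and these two activity factors.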

  16. STAMINA - Model description. Standard Model Instrumentation for Noise Assessments

    NARCIS (Netherlands)

    Schreurs EM; Jabben J; Verheijen ENG; CMM; mev

    2010-01-01

    This report describes the STAMINA model, which stands for Standard Model Instrumentation for Noise Assessments and was developed by RIVM. The institute uses this standard model to map environmental noise in the Netherlands. The model is based on the Standard Mapping Method (Standaard Karteringsmethode)

  17. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers

    DEFF Research Database (Denmark)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel

    2016-01-01

    ... the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension... The number of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  18. The Trimeric Model: A New Model of Periodontal Treatment Planning

    Science.gov (United States)

    Tarakji, Bassel

    2014-01-01

    Treatment of periodontal disease is a complex and multidisciplinary procedure, requiring periodontal, surgical, restorative, and orthodontic treatment modalities. Several authors have attempted to formulate models for periodontal treatment that order the treatment steps in a logical and easy to remember manner. In this article, we discuss two models of periodontal treatment planning from two of the most well-known textbooks in the specialty of periodontics internationally. We then modify them to arrive at a new model of periodontal treatment planning, the Trimeric Model. Adding restorative and orthodontic interrelationships with periodontal treatment allows us to expand this model into the Extended Trimeric Model of periodontal treatment planning. These models will provide a logical framework and a clear order for the treatment of periodontal disease for general practitioners and periodontists alike. PMID:25177662

  19. Model building

    International Nuclear Information System (INIS)

    Frampton, Paul H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA

  20. Composite hadron models

    International Nuclear Information System (INIS)

    Ogava, S.; Savada, S.; Nakagava, M.

    1983-01-01

    Composite models of hadrons are considered, with the main attention paid to the Sakata model. The model presupposes that the proton, neutron, and Λ particle are the fundamental particles. Theoretical studies of the unknown fundamental constituents of matter have led to the creation of the quark model. In the framework of the quark model, the classification of mesons and baryons is considered using the theory of SU(6) symmetry. Relations between hadron masses, their spins, and their electromagnetic properties are explained using the quark model. The problem of the three-colour model with many flavours is briefly presented.

  1. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.
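The "more mechanistic basis" for leaf photosynthesis referred to here usually means a Farquhar-von Caemmerer-Berry-type biochemical model, in which net assimilation is the minimum of the Rubisco-limited and RuBP-regeneration-limited rates minus respiration. A sketch with illustrative C3-leaf parameter values (assumed defaults, not values from the paper):

```python
def fvcb_assimilation(ci, vcmax=60.0, j=120.0, rd=1.0,
                      gamma_star=42.75, kc=404.9, ko=278.4, o=210.0):
    """Farquhar-von Caemmerer-Berry style net leaf photosynthesis.

    ac: Rubisco-limited rate; aj: RuBP-regeneration (electron transport)
    limited rate; net assimilation is min(ac, aj) minus day respiration.
    Rates in umol m-2 s-1; ci, gamma_star, kc in ubar; o, ko in mbar.
    """
    ac = vcmax * (ci - gamma_star) / (ci + kc * (1 + o / ko))
    aj = j * (ci - gamma_star) / (4 * ci + 8 * gamma_star)
    return min(ac, aj) - rd

# Rubisco limits at low intercellular CO2, RuBP regeneration at high CO2:
a_low = fvcb_assimilation(250.0)
a_high = fvcb_assimilation(800.0)
```

Cross-scale frameworks of the kind reviewed here upscale this leaf-level response through the canopy to supply the source term of a crop growth model.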

  3. A School Management Information System Model Based on Unified Modeling Language Notation

    Directory of Open Access Journals (Sweden)

    Yohannes Kurniawan

    2013-12-01

    Full Text Available Basically, the use of integrated information systems can be applied not only to companies but also to the education industry, particularly schools. To support business processes at the school, this research describes a conceptual model of an information system using the Unified Modeling Language (UML) notation with the "4+1 View" architectural model. This model is expected to assist in analyzing and designing the whole of the business processes at the school. A conceptual model of the information system can help application developers understand the school system easily and clearly. By adopting this information system model, schools are able to gain an effective understanding of management information systems.

  4. An Agent Model Integrating an Adaptive Model for Environmental Dynamics

    NARCIS (Netherlands)

    Treur, J.; Umair, M.

    2011-01-01

    The environments in which agents are used often may be described by dynamical models, e.g., in the form of a set of differential equations. In this paper, an agent model is proposed that can perform model-based reasoning about the environment, based on a numerical (dynamical system) model of the

  5. Model unspecific search in CMS. Model unspecific limits

    Energy Technology Data Exchange (ETDEWEB)

    Knutzen, Simon; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    The standard model of particle physics is increasingly challenged by recent discoveries and also by long-known phenomena, representing a strong motivation to develop extensions of the standard model. The number of theories describing possible extensions is large and steadily growing. In this presentation a new approach is introduced that verifies whether a given theory beyond the standard model is consistent with data collected by the CMS detector, without the need to perform a dedicated search. To achieve this, model-unspecific limits on the number of additional events above the standard model expectation are calculated in every event class produced by the MUSiC algorithm. Furthermore, a tool is provided to translate these results into limits on the signal cross section of any theory. In addition to the general procedure, first results and examples are shown using the proton-proton collision data taken at a centre-of-mass energy of 8 TeV.

  6. Model-Independent Diffs

    DEFF Research Database (Denmark)

    Könemann, Patrick

    just contain a list of strings, one for each line, whereas the structure of models is defined by their meta models. There are tools available which are able to compute the diff between two models, e.g. RSA or EMF Compare. However, their diff is not model-independent, i.e. it refers to the models...

  7. The CAFE model: A net production model for global ocean phytoplankton

    Science.gov (United States)

    Silsbe, Greg M.; Behrenfeld, Michael J.; Halsey, Kimberly H.; Milligan, Allen J.; Westberry, Toby K.

    2016-12-01

    The Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) net primary production model is an adaptable framework for advancing global ocean productivity assessments by exploiting state-of-the-art satellite ocean color analyses and addressing key physiological and ecological attributes of phytoplankton. Here we present the first implementation of the CAFE model that incorporates inherent optical properties derived from ocean color measurements into a mechanistic and accurate model of phytoplankton growth rates (μ) and net phytoplankton production (NPP). The CAFE model calculates NPP as the product of energy absorption (QPAR) and the efficiency (ϕμ) by which absorbed energy is converted into carbon biomass (CPhyto), while μ is calculated as NPP normalized to CPhyto. The CAFE model performance is evaluated alongside 21 other NPP models against a spatially robust and globally representative set of direct NPP measurements. This analysis demonstrates that the CAFE model explains the greatest amount of variance and has the lowest model bias relative to other NPP models analyzed with this data set. Global oceanic NPP from the CAFE model (52 Pg C yr-1) and mean division rates (0.34 day-1) are derived from climatological satellite data (2002-2014). This manuscript discusses and validates individual CAFE model parameters (e.g., QPAR and ϕμ), provides detailed sensitivity analyses, and compares the CAFE model results and parameterization to other widely cited models.
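
The model's two central relations, NPP as the product of absorbed energy and conversion efficiency, and growth rate as NPP normalised to phytoplankton carbon, can be written out directly. The numbers below are illustrative placeholders, not CAFE satellite retrievals:

```python
def cafe_npp(q_par, phi_mu):
    """NPP as absorbed energy (QPAR) times the efficiency (phi_mu) with which
    absorbed energy is converted into carbon biomass."""
    return q_par * phi_mu

def division_rate(npp, c_phyto):
    """Growth/division rate mu: NPP normalised to phytoplankton carbon."""
    return npp / c_phyto

# Illustrative (made-up) per-pixel values, not CAFE outputs:
npp = cafe_npp(q_par=100.0, phi_mu=0.02)  # -> 2.0
mu = division_rate(npp, c_phyto=5.0)      # -> 0.4
print(npp, mu)
```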

  8. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    Science.gov (United States)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  9. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and finally errors of commission in nuclear power plants. Quantitative or prescriptive models to predict an operator's situation assessment in a situation, and the results of that assessment, provide many benefits such as HSI design solutions, human performance data, and human reliability estimates. Unfortunately, only a few computational situation assessment models for NPP operators have been proposed, and those insufficiently embed human cognitive characteristics. Thus we propose a new computational situation assessment model of nuclear power plant operators. The proposed model, incorporating significant cognitive factors, uses a Bayesian belief network (BBN) as its model architecture. It is believed that communication between nuclear power plant operators affects their situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As a result, the proposed model succeeded in representing the operators' behavior, and this paper shows the details.
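
At its core, BBN-based situation assessment amounts to repeated Bayesian belief updates over plant states as indicator readings arrive. A minimal sketch follows; the states, indicators, and probabilities are invented for illustration and are not from the proposed model:

```python
def bayes_update(prior, likelihoods):
    """Posterior over hypotheses after one observation.
    prior: {state: P(state)}; likelihoods: {state: P(obs | state)}."""
    unnorm = {s: prior[s] * likelihoods[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Hypothetical plant states and indicator likelihoods (illustrative numbers):
belief = {"normal": 0.90, "leak": 0.10}
# Indicator 1: falling pressurizer level, much likelier under a leak.
belief = bayes_update(belief, {"normal": 0.05, "leak": 0.70})
# Indicator 2: rising sump level, also likelier under a leak.
belief = bayes_update(belief, {"normal": 0.10, "leak": 0.60})
print({s: round(p, 3) for s, p in belief.items()})
# -> {'normal': 0.097, 'leak': 0.903}
```

Two consistent indicators shift the belief from a confident "normal" assessment to a confident "leak" assessment; a full BBN additionally models dependencies among indicators and cognitive factors.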

  10. Model building

    International Nuclear Information System (INIS)

    Frampton, P.H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA. copyright 1998 American Institute of Physics

  11. Forest-fire models

    Science.gov (United States)

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  12. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically, we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...

  13. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  14. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology.

  15. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, distinct from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g., soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  16. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Directory of Open Access Journals (Sweden)

    C Brandon Ogbunugafor

    Full Text Available Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.

  17. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Science.gov (United States)

    Ogbunugafor, C Brandon; Robinson, Sean P

    2016-01-01

    Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.
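
The "single unambiguous rule" for translating a diagram into ODEs, namely that each flow drains its source species and feeds its target species at the same rate, can be sketched as a tiny interpreter. The species names, rate functions, and SIR-style example below are hypothetical illustrations in the spirit of the schema, not part of OFFL itself:

```python
def derivatives(state, flows):
    """Apply the flow-diagram rule: every flow drains its source species and
    feeds its target species at the same rate, so each species' net rate of
    change is (sum of inflows) - (sum of outflows)."""
    d = {species: 0.0 for species in state}
    for rate_fn, source, target in flows:
        r = rate_fn(state)
        d[source] -= r
        d[target] += r
    return d

# Hypothetical SIR-style compartmental example (beta, gamma illustrative):
beta, gamma = 0.3, 0.1
flows = [
    (lambda s: beta * s["S"] * s["I"], "S", "I"),  # infection flow
    (lambda s: gamma * s["I"], "I", "R"),          # recovery flow
]
state = {"S": 0.99, "I": 0.01, "R": 0.0}
d = derivatives(state, flows)
print(d)  # parcels are conserved, so dS + dI + dR == 0
```

Because every parcel leaving one species arrives in another, conservation holds by construction, which is exactly the discipline the formalized diagrams are meant to enforce.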

  18. Modelling Constructs

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2009-01-01

    There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these notations have been extended in order to increase expressiveness and to be more competitive. This resulted in an increasing number of notations and formalisms for modelling business processes and in an increase of the different modelling constructs provided by modelling notations, which makes it difficult to compare modelling notations and to make transformations between them. One of the reasons is that, in each notation, the new concepts are introduced in a different way by extending the already existing constructs. In this chapter, we go the opposite direction: We show that it is possible to add most...

  19. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess the railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the main two quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial qualities parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry, for the main Portuguese railway line Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures lead to a better model fit
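
As a toy illustration of the Gibbs sampling strategy mentioned in the highlights, here is a sampler for a two-level normal hierarchy (section-level quality parameters drawn from a shared population distribution). This is a deliberately stripped-down stand-in; the paper's actual HBM, with spatially correlated (CAR) components over consecutive track sections, is far richer:

```python
import random

def gibbs_hierarchical_means(data, n_iter=2000, tau2=1.0, sigma2=1.0, seed=1):
    """Toy Gibbs sampler for y_ij ~ N(theta_i, sigma2), theta_i ~ N(mu, tau2),
    with a flat prior on mu. Returns posterior-mean estimates of each theta_i
    and of mu. Both conditionals are conjugate normals."""
    rng = random.Random(seed)
    k = len(data)
    theta = [sum(y) / len(y) for y in data]  # start at the section means
    mu = sum(theta) / k
    theta_sum = [0.0] * k
    mu_sum = 0.0
    for _ in range(n_iter):
        # theta_i | mu, y_i is normal (precision-weighted data/prior blend).
        for i, y in enumerate(data):
            prec = len(y) / sigma2 + 1.0 / tau2
            mean = (sum(y) / sigma2 + mu / tau2) / prec
            theta[i] = rng.gauss(mean, (1.0 / prec) ** 0.5)
        # mu | theta is normal around the mean of the theta_i.
        mu = rng.gauss(sum(theta) / k, (tau2 / k) ** 0.5)
        for i in range(k):
            theta_sum[i] += theta[i]
        mu_sum += mu
    return [t / n_iter for t in theta_sum], mu_sum / n_iter

# Illustrative "degradation rates" for three track sections:
sections = [[0.9, 1.1, 1.0], [1.9, 2.1, 2.0], [2.9, 3.1, 3.0]]
theta_hat, mu_hat = gibbs_hierarchical_means(sections)
print([round(t, 1) for t in theta_hat], round(mu_hat, 1))
```

The estimates are shrunk toward the shared mean, the characteristic effect of hierarchical pooling that makes HBMs attractive for many short, noisy track sections.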

  20. Foraminifera Models to Interrogate Ostensible Proxy-Model Discrepancies During Late Pliocene

    Science.gov (United States)

    Jacobs, P.; Dowsett, H. J.; de Mutsert, K.

    2017-12-01

    Planktic foraminifera faunal assemblages have been used in the reconstruction of past oceanic states (e.g. the Last Glacial Maximum, the mid-Piacenzian Warm Period). However these reconstruction efforts have typically relied on inverse modeling using transfer functions or the modern analog technique, which by design seek to translate foraminifera into one or two target oceanic variables, primarily sea surface temperature (SST). These reconstructed SST data have then been used to test the performance of climate models, and discrepancies have been attributed to shortcomings in climate model processes and/or boundary conditions. More recently forward proxy models or proxy system models have been used to leverage the multivariate nature of proxy relationships to their environment, and to "bring models into proxy space". Here we construct ecological models of key planktic foraminifera taxa, calibrated and validated with World Ocean Atlas (WO13) oceanographic data. Multiple modeling methods (e.g. multilayer perceptron neural networks, Mahalanobis distance, logistic regression, and maximum entropy) are investigated to ensure robust results. The resulting models are then driven by a Late Pliocene climate model simulation with biogeochemical as well as temperature variables. Similarities and differences with previous model-proxy comparisons (e.g. PlioMIP) are discussed.

  1. On the equivalence between sine-Gordon model and Thirring model in the chirally broken phase of the Thirring model

    International Nuclear Information System (INIS)

    Faber, M.; Ivanov, A.N.

    2001-01-01

    We investigate the equivalence between Thirring model and sine-Gordon model in the chirally broken phase of the Thirring model. This is unlike all other available approaches where the fermion fields of the Thirring model were quantized in the chiral symmetric phase. In the path integral approach we show that the bosonized version of the massless Thirring model is described by a quantum field theory of a massless scalar field and is exactly solvable, and the massive Thirring model bosonizes to the sine-Gordon model with a new relation between the coupling constants. We show that the non-perturbative vacuum of the chirally broken phase in the massless Thirring model can be described in complete analogy with the BCS ground state of superconductivity. The Mermin-Wagner theorem and Coleman's statement concerning the absence of Goldstone bosons in the 1+1-dimensional quantum field theories are discussed. We investigate the current algebra in the massless Thirring model and give a new value of the Schwinger term. We show that the topological current in the sine-Gordon model coincides with the Noether current responsible for the conservation of the fermion number in the Thirring model. This allows one to identify the topological charge in the sine-Gordon model with the fermion number. (orig.)
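
For context, the classic correspondence that this work revisits can be stated compactly. The relations below are the textbook bosonization identities (Coleman's coupling relation and the topological/fermion-number identification); they are not the modified relation derived in the paper:

```latex
% Coleman's classic relation between the Thirring coupling g
% and the sine-Gordon coupling \beta:
\frac{\beta^2}{4\pi} \;=\; \frac{1}{1 + g/\pi}
% Topological current and charge of the sine-Gordon field \varphi:
j^{\mu}_{\mathrm{top}} = \frac{\beta}{2\pi}\,\varepsilon^{\mu\nu}\partial_{\nu}\varphi,
\qquad
Q_{\mathrm{top}} = \frac{\beta}{2\pi}\int \mathrm{d}x\,\partial_{x}\varphi
% Identification with the Thirring fermion-number (Noether) current,
% as discussed in the abstract:
j^{\mu}_{\mathrm{top}} \;\longleftrightarrow\; \bar{\psi}\gamma^{\mu}\psi,
\qquad
Q_{\mathrm{top}} = N_{\mathrm{fermion}}
```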

  2. Mathematical models for atmospheric pollutants. Appendix D. Available air quality models. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; McNaughton, D.J.; Huang, C.

    1979-08-01

    Models that are available for the analysis of airborne pollutants are summarized. In addition, recommendations are given concerning the use of particular models to aid in particular air quality decision making processes. The air quality models are characterized in terms of time and space scales, steady state or time dependent processes, reference frames, reaction mechanisms, treatment of turbulence and topography, and model uncertainty. Using these characteristics, the models are classified in the following manner: simple deterministic models, such as air pollution indices, simple area source models and rollback models; statistical models, such as averaging time models, time series analysis and multivariate analysis; local plume and puff models; box and multibox models; finite difference or grid models; particle models; physical models, such as wind tunnels and liquid flumes; regional models; and global models
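
Of the classes listed, the local plume models are the most compact to state. Below is a sketch of the steady-state Gaussian plume with ground reflection, the textbook form underlying many local plume models; the emission rate, wind speed, and dispersion parameters are illustrative, and in practice sigma_y and sigma_z would come from stability-class curves at the receptor's downwind distance:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) with ground reflection.

    Q : emission rate (g/s); u : mean wind speed (m/s), wind along +x.
    y, z : crosswind and vertical receptor coordinates (m).
    H : effective stack height (m).
    sigma_y, sigma_z : dispersion parameters (m) at the receptor's
    downwind distance (normally taken from stability-class curves).
    """
    lateral = math.exp(-(y * y) / (2.0 * sigma_y ** 2))
    # Ground reflection: add an image source mirrored below the surface.
    vertical = (math.exp(-((z - H) ** 2) / (2.0 * sigma_z ** 2))
                + math.exp(-((z + H) ** 2) / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers: 100 g/s source, 5 m/s wind, ground-level centreline
# receptor, 50 m effective stack height, sigmas at roughly 1 km downwind.
c = gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"{c:.2e} g/m^3")
```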

  3. Fatigue modelling according to the JCSS Probabilistic model code

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2007-01-01

    The Joint Committee on Structural Safety is working on a Model Code for full probabilistic design. The code consists of three major parts: Basis of Design, Load Models, and Models for Material and Structural Properties. The code is intended as the operational counterpart of codes like ISO,

  4. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

    Five evapotranspiration (Et) models (the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models) were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was based on the ...

  5. On coupling global biome models with climate models

    OpenAIRE

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992; J. Biogeogr. 19: 117-134), which predicts global vegetation patterns in equilibrium with climate, was coupled with the ECHAM climate model of the Max-Planck-Institut für Meteorologie, Hamburg, Germany. It was found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only betw...

  6. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naive Bayes (NB) model with a mixture of factor analyzers, thereby relaxing the assumptions... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers...

  7. Template for Conceptual Model Construction: Model Review and Corps Applications

    National Research Council Canada - National Science Library

    Henderson, Jim E; O'Neil, L. J

    2007-01-01

    .... The template will expedite conceptual model construction by providing users with model parameters and potential model components, building on a study team's knowledge and experience, and promoting...

  8. Nonlinear Modeling by Assembling Piecewise Linear Models

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model is robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
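
The assembly idea, local first-order Taylor models blended by radial basis function weights, can be shown on a scalar toy function rather than an aerodynamic system. Everything below (the target function, sampling states, and Gaussian RBF width) is an illustrative stand-in, not the paper's reduced-order model:

```python
import math

def assemble_piecewise_linear(samples, width=0.5):
    """Blend local first-order Taylor models f(x0) + f'(x0)(x - x0) with
    normalised Gaussian RBF weights centred at the sampling states.
    `samples` is a list of (x0, f(x0), f'(x0)) triples."""
    def model(x):
        weights = [math.exp(-((x - x0) / width) ** 2) for x0, _, _ in samples]
        total = sum(weights)
        return sum(w * (f0 + df0 * (x - x0))
                   for w, (x0, f0, df0) in zip(weights, samples)) / total
    return model

# Hypothetical example: approximate sin(x) from linearisations at a few states.
xs = [0.0, 1.0, 2.0, 3.0]
samples = [(x0, math.sin(x0), math.cos(x0)) for x0 in xs]
rom = assemble_piecewise_linear(samples)
print(round(rom(1.5), 3), round(math.sin(1.5), 3))  # blended vs. exact value
```

Denser sampling states and a narrower RBF width tighten the approximation; the trade-off between the number of stored local solutions and accuracy is the central design choice in such assembled models.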

  9. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  10. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla. CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  11. Heterogeneous traffic flow modelling using second-order macroscopic continuum model

    Science.gov (United States)

    Mohan, Ranju; Ramadurai, Gitakrishnan

    2017-01-01

    Modelling heterogeneous traffic flow that lacks lane discipline is one of the emerging research areas of the past few years. The two main challenges in modelling are capturing the effect of the varying sizes of vehicles and the lack of lane discipline, which together lead to the 'gap filling' behaviour of vehicles. The same section of road can be occupied by different types of vehicles at the same time, and the conventional measure of traffic concentration, density (vehicles per lane per unit length), is not a good measure for heterogeneous traffic modelling. The first aim of this paper is to develop a parsimonious model of heterogeneous traffic that can capture the unique phenomenon of gap filling. The second aim is to emphasize the suitability of higher-order models for modelling heterogeneous traffic. Third, the paper aims to suggest area occupancy as the concentration measure for heterogeneous traffic lacking lane discipline. The two main challenges mentioned above are addressed by extending an existing second-order continuum model of traffic flow, using area occupancy instead of density as the measure of traffic concentration. The extended model is calibrated and validated with field data from an arterial road in Chennai city, and the results are compared with those from a few existing generalized multi-class models.
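The concentration measure suggested in this record can be illustrated with a small sketch (the vehicle dimensions below are hypothetical and not taken from the paper's calibration): area occupancy is the fraction of road area covered by vehicle footprints, which remains meaningful when cars, motorcycles, and buses share the same section without lane discipline.

```python
def area_occupancy(vehicles, section_length_m, road_width_m):
    """Fraction of total road area occupied by vehicles.

    vehicles: list of (length_m, width_m) plan-view dimensions.
    Unlike density (vehicles per lane per unit length), this measure
    needs no lane concept and accounts for vehicle size directly.
    """
    road_area = section_length_m * road_width_m
    occupied = sum(length * width for length, width in vehicles)
    return occupied / road_area

# Mixed traffic on a 100 m x 7 m section: 2 cars, 3 motorcycles, 1 bus
mix = [(4.0, 1.8)] * 2 + [(2.0, 0.8)] * 3 + [(10.0, 2.5)]
print(round(area_occupancy(mix, 100.0, 7.0), 4))  # 44.2 / 700 ≈ 0.0631
```

Counting vehicles alone would treat a motorcycle and a bus identically; the occupied-area ratio does not, which is the point of the measure.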

  12. Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.

    Science.gov (United States)

    Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer

    2017-06-01

    Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology.

  13. PD/PID controller tuning based on model approximations: Model reduction of some unstable and higher order nonlinear models

    Directory of Open Access Journals (Sweden)

    Christer Dalen

    2017-10-01

    Full Text Available A model reduction technique based on optimization theory is presented, in which a possibly higher-order system/model is approximated by an unstable DIPTD (double integrating plus time delay) model using only step response data. The DIPTD model is then used to tune PD/PID controllers for the underlying higher-order system. Numerous examples, covering both linear and nonlinear models, illustrate the theory. The Pareto-optimal controller is used as a reference controller.
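A DIPTD model is commonly written as G(s) = K e^{-θs}/s², i.e. a double integrator with gain K and time delay θ. As a hedged sketch (not the paper's tuning procedure), its unit-step response is zero during the delay and then grows quadratically, which is what a reduction routine would fit to measured step-response data:

```python
def diptd_step_response(t, K, theta):
    """Unit-step response of a DIPTD model G(s) = K * exp(-theta*s) / s^2:
    y(t) = K * (t - theta)^2 / 2 for t >= theta, else 0."""
    if t < theta:
        return 0.0
    return K * (t - theta) ** 2 / 2.0

# Example: gain K = 0.5, time delay theta = 1 s, evaluated at t = 3 s
print(diptd_step_response(3.0, 0.5, 1.0))  # 0.5 * (3-1)^2 / 2 = 1.0
```

Fitting K and θ of this two-parameter curve to recorded step data is what makes the approach usable with "only step response data".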

  14. Coupling population dynamics with earth system models: the POPEM model.

    Science.gov (United States)

    Navarro, Andrés; Moreno, Raúl; Jiménez-Alcázar, Alfonso; Tapiador, Francisco J

    2017-09-16

    Precise modeling of CO2 emissions is important for environmental research. This paper presents a new model of human population dynamics that can be embedded into ESMs (Earth System Models) to improve climate modeling. Through a system dynamics approach, we develop a cohort-component model that successfully simulates historical population dynamics with fine spatial resolution (about 1°×1°). The population projections are used to improve the estimates of CO2 emissions, thus transcending the bulk approach of existing models and allowing more realistic non-linear effects to feature in the simulations. The module, dubbed POPEM (from Population Parameterization for Earth Models), is compared with current emission inventories and validated against UN aggregated data. Finally, it is shown that the module can be used to advance toward fully coupling the social and natural components of the Earth system, an emerging research path for environmental science and pollution research.
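The cohort-component method mentioned here has a simple core: each age cohort is advanced by a survival rate while births form a new youngest cohort. The sketch below illustrates one projection step with made-up rates (POPEM itself works per grid cell and with many more cohorts):

```python
def project_cohorts(pop, survival, fertility):
    """One step of a cohort-component population projection.

    pop:       population per age cohort (youngest first).
    survival:  survival[i] = fraction of cohort i surviving into cohort i+1.
    fertility: fertility[i] = births per person in cohort i during the step.
    """
    births = sum(p * f for p, f in zip(pop, fertility))
    aged = [pop[i] * survival[i] for i in range(len(pop) - 1)]
    return [births] + aged

# Hypothetical three-cohort population (e.g. ages 0-19, 20-39, 40+)
pop = [100.0, 120.0, 80.0]
survival = [0.99, 0.95, 0.0]   # last cohort does not advance further
fertility = [0.0, 0.8, 0.0]    # only the middle cohort reproduces
print(project_cohorts(pop, survival, fertility))
```

Iterating this step over time, with rates varying per grid cell, yields the spatially resolved projections that feed the emission estimates.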

  15. Applications of the k – ω Model in Stellar Evolutionary Models

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yan, E-mail: ly@ynao.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216 (China)

    2017-05-20

    The k – ω model for turbulence was first proposed by Kolmogorov. A new k – ω model for stellar convection was developed by Li, which could reasonably describe turbulent convection not only in the convectively unstable zone, but also in the overshooting regions. We revised the k – ω model by improving several model assumptions (including the macro-length of turbulence, convective heat flux, and turbulent mixing diffusivity, etc.), making it applicable not only to convective envelopes, but also to convective cores. Eight parameters are introduced in the revised k – ω model. It should be noted that the Reynolds stress (turbulent pressure) is neglected in the equation of hydrostatic support. We applied the model to solar models and 5 M⊙ stellar models to calibrate the eight model parameters, as well as to investigate the effects of convective overshooting on the Sun and intermediate-mass stellar models.

  16. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
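The mechanics of DMA can be sketched in a few lines: model probabilities are flattened with a forgetting factor (allowing the best model to change over time), updated with each model's predictive likelihood of the new observation, and then a dynamic Occam's window discards models far below the current leader. The numbers below are illustrative, and this scalar sketch omits the per-model parameter filtering of the full method:

```python
def dma_update(weights, likelihoods, alpha=0.99, window=0.01):
    """One DMA step: forgetting-factor prediction, Bayesian update,
    then a dynamic Occam's window that zeroes out improbable models."""
    # Prediction step: raise to power alpha (< 1) to forget old evidence
    pred = [w ** alpha for w in weights]
    s = sum(pred)
    pred = [p / s for p in pred]
    # Update step: weight by predictive likelihood of the new datum
    post = [p * l for p, l in zip(pred, likelihoods)]
    s = sum(post)
    post = [p / s for p in post]
    # Occam's window: drop models below a fraction of the best model
    best = max(post)
    post = [p if p >= window * best else 0.0 for p in post]
    s = sum(post)
    return [p / s for p in post]

w = dma_update([0.5, 0.3, 0.2], likelihoods=[0.9, 0.4, 0.001])
print([round(x, 3) for x in w])
```

Restricting subsequent updates to the models surviving the window is what keeps the method tractable when the candidate space is large.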

  17. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  18. The reservoir model: a differential equation model of psychological regulation.

    Science.gov (United States)

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging.
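The reservoir metaphor corresponds to a first-order differential equation dL/dt = input(t) − k·L, where k is the dissipation parameter. A minimal forward simulation (illustrative values; the paper fits this as a latent differential equation model rather than simulating it this way) shows the level rising under a burst of stressors and then decaying:

```python
def simulate_reservoir(inputs, k, level0=0.0, dt=1.0):
    """Explicit Euler integration of dL/dt = input(t) - k * L:
    inputs add to the level, which dissipates at rate k."""
    level = level0
    trajectory = []
    for u in inputs:
        level += dt * (u - k * level)
        trajectory.append(level)
    return trajectory

# Stressor inputs on days 1-3, then none; dissipation rate k = 0.3
traj = simulate_reservoir([1, 1, 1, 0, 0, 0], k=0.3)
print([round(x, 2) for x in traj])  # [1.0, 1.7, 2.19, 1.53, 1.07, 0.75]
```

Larger k means faster "blowing off steam": the level falls back more quickly once the inputs stop.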

  19. Development of bubble-induced turbulence model for advanced two-fluid model

    International Nuclear Information System (INIS)

    Hosoi, Hideaki; Yoshida, Hiroyuki

    2011-01-01

    A two-fluid model can simulate two-phase flow at a lower computational cost than detailed two-phase flow simulation methods such as the interface tracking method. The two-fluid model is therefore useful for thermal hydraulic analysis in large-scale domains such as rod bundles. However, since the two-fluid model includes many constitutive equations verified against experimental results, the results of analyses depend on the accuracy of those constitutive equations. To solve this problem, an advanced two-fluid model has been developed by the Japan Atomic Energy Agency. In this model, the interface tracking method is combined with the two-fluid model to accurately predict the behavior of large interface structures. Liquid clusters and bubbles larger than a computational cell are calculated using the interface tracking method, and those smaller than the cell are simulated by the two-fluid model. Constitutive equations to evaluate the effects of small bubbles or droplets on two-phase flow are also required in the advanced two-fluid model, just as in the conventional two-fluid model. However, the dependency of small bubbles and droplets on two-phase flow characteristics is relatively small, and fewer experimental results are required to verify the characteristics of large interface structures. The turbulent dispersion force model is one of the most important constitutive equations for the advanced two-fluid model. Turbulent dispersion force models have been developed by many researchers for the conventional two-fluid model. However, existing models implicitly include the effects of large bubbles and of bubble deformation, and are unfortunately not applicable to the advanced two-fluid model. In a previous study, the authors suggested a turbulent dispersion force model based on the analogy of Brownian motion, and they have improved that model in consideration of bubble-induced turbulence to improve the analysis results for small

  20. Modeling environmental policy

    International Nuclear Information System (INIS)

    Martin, W.E.; McDonald, L.A.

    1997-01-01

    The eight book chapters demonstrate the link between physical models of the environment and policy analysis in support of policy making. Each chapter addresses an environmental policy issue using a quantitative modeling approach. The volume addresses three general areas of environmental policy - non-point source pollution in the agricultural sector, pollution generated in the extractive industries, and transboundary pollutants from burning fossil fuels. The book concludes by discussing the modeling efforts and the use of mathematical models in general. Chapters are entitled: modeling environmental policy: an introduction; modeling nonpoint source pollution in an integrated system (agri-ecological); modeling environmental and trade policy linkages: the case of EU and US agriculture; modeling ecosystem constraints in the Clean Water Act: a case study in Clearwater National Forest (subject to discharge from metal mining waste); costs and benefits of coke oven emission controls; modeling equilibria and risk under global environmental constraints (discussing energy and environmental interrelations); relative contribution of the enhanced greenhouse effect on the coastal changes in Louisiana; and the use of mathematical models in policy evaluations: comments. The paper on coke oven emission controls has been abstracted separately for the IEA Coal Research CD-ROM.

  1. Systemic resilience model

    International Nuclear Information System (INIS)

    Lundberg, Jonas; Johansson, Björn JE

    2015-01-01

    It has been realized that resilience as a concept involves several contradictory definitions, for instance resilience as agile adjustment versus resilience as robust resistance to situations. Our analysis of resilience concepts and models suggests that, beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems. - Highlights: • The SyRes model resolves contradictions between previous resilience definitions. • SyRes is a core model for envisioning and evaluating resilience metrics and models. • SyRes describes six functions in a systemic model. • They are anticipation, monitoring, response, recovery, learning, self-monitoring. • The model describes dependencies between constraints, functions and strategies.

  2. Differential Topic Models.

    Science.gov (United States)

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.
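The power-law behaviour attributed to the Pitman-Yor process can be seen in its Chinese-restaurant construction, sketched below (a generic illustration of the process, not the paper's TPYP sampler): with discount d > 0, a few clusters grow very large while many stay small, mirroring word-frequency distributions.

```python
import random

def pitman_yor_partition(n, d, alpha, seed=0):
    """Sample a partition of n items from a Pitman-Yor process via the
    Chinese-restaurant construction (d: discount in [0,1), alpha > -d:
    concentration). Customer joins table k with prob proportional to
    (count_k - d), or a new table with prob proportional to (alpha + d*K)."""
    rng = random.Random(seed)
    tables = []  # customers seated at each table
    for _ in range(n):
        k = len(tables)
        weights = [c - d for c in tables] + [alpha + d * k]
        r = rng.random() * sum(weights)
        for j, w in enumerate(weights):
            r -= w
            if r < 0:
                break
        if j == k:
            tables.append(1)   # open a new table (new cluster)
        else:
            tables[j] += 1     # join an existing table
    return sorted(tables, reverse=True)

print(pitman_yor_partition(200, d=0.5, alpha=1.0))
```

Setting d = 0 recovers the Dirichlet process; the paper's argument is that d > 0 matches topic-word statistics much better.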

  3. Making sense to modelers: Presenting UML class model differences in prose

    DEFF Research Database (Denmark)

    Störrle, Harald

    2013-01-01

    Understanding the difference between two models, such as different versions of a design, can be difficult. It is a commonly held belief in the model differencing community that the best way of presenting a model difference is by using graph or tree-based visualizations. We disagree and present an alternative approach where sets of low-level model differences are abstracted into high-level model differences that lend themselves to being presented textually. This format is informed by an explorative survey to elicit the change descriptions modelers use themselves. Our approach is validated by a controlled experiment that tests three alternatives to presenting model differences. Our findings support our claim that the approach presented here is superior to EMF Compare.

  4. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, differing from previous modeling efforts: those focused on addressing uncertainty in physical parameters (i.e. soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  5. Modeling Ability Differentiation in the Second-Order Factor Model

    Science.gov (United States)

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  6. Data-Model and Inter-Model Comparisons of the GEM Outflow Events Using the Space Weather Modeling Framework

    Science.gov (United States)

    Welling, D. T.; Eccles, J. V.; Barakat, A. R.; Kistler, L. M.; Haaland, S.; Schunk, R. W.; Chappell, C. R.

    2015-12-01

    Two storm periods were selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of their high magnetospheric activity and extensive data coverage: the September 27 - October 4, 2002 corotating interaction region event and the October 22 - 29 coronal mass ejection event. During both events, the FAST, Polar, Cluster, and other missions made key observations, creating prime periods for data-model comparison. The GEM community has come together to simulate this period using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of these important periods compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Emphasis will be given to the second event. Density and velocity of oxygen and hydrogen throughout the lobes, plasma sheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. Inter-model comparisons will illustrate how the different outflow specifications affect the magnetosphere. Specifically, it is found that the GPW provides increased heavy ion outflow over a broader spatial range than the alternative

  7. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two. The model describes both functional and timing properties of an interface.

  8. Wastewater treatment models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2011-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical-chemical processes, hydraulics and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2D/3D dynamic numerical models. Plant-wide modeling is set to advance further the practice…

  9. Wastewater Treatment Models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2008-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical–chemical processes, hydraulics, and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2-D/3-D dynamic numerical models. Plant-wide modeling is set to advance further…

  10. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect of controlling and managing software. Through such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, bringing together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system, named the fuzzy hierarchical usability model, can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also presents a detailed comparison of the proposed model with existing usability models.
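The ranking step can be illustrated with a crisp stand-in for the fuzzy aggregation (the attribute names, weights, and scores below are invented for illustration; the paper's model uses fuzzy membership functions rather than a plain weighted sum):

```python
def usability_score(attribute_scores, weights):
    """Weighted aggregation of usability attribute scores (0-10 scale).
    Weights are normalized internally, so they need not sum to one."""
    total_w = sum(weights.values())
    return sum(attribute_scores[a] * w for a, w in weights.items()) / total_w

# Hypothetical attribute weights from a usability taxonomy
weights = {"learnability": 0.3, "efficiency": 0.3,
           "memorability": 0.2, "satisfaction": 0.2}

# Hypothetical expert scores for two SDLC models
models = {
    "waterfall": {"learnability": 6, "efficiency": 5,
                  "memorability": 7, "satisfaction": 5},
    "agile":     {"learnability": 8, "efficiency": 7,
                  "memorability": 6, "satisfaction": 8},
}
ranked = sorted(models, key=lambda m: usability_score(models[m], weights),
                reverse=True)
print(ranked)  # ['agile', 'waterfall']
```

Replacing the crisp scores with fuzzy membership grades and the weighted sum with fuzzy inference gives the flavour of the proposed system.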

  11. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    Science.gov (United States)

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities can't be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
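The "lag" idea reduces to a first-order relaxation: the modeled stress does not jump to its equilibrium value but approaches it at a finite rate. A scalar sketch of this generic form (illustrative coefficients, a stand-in for the tensor transport equations of the actual lagRST model) is:

```python
def lag_relax(r, r_eq, omega, a0, dt, steps):
    """First-order lag of a modeled stress toward its equilibrium value:
    dR/dt = a0 * omega * (R_eq - R), integrated with explicit Euler.
    omega plays the role of the turbulence frequency scale."""
    history = [r]
    for _ in range(steps):
        r += dt * a0 * omega * (r_eq - r)
        history.append(r)
    return history

# Stress responding to a sudden change in the mean-flow equilibrium value
h = lag_relax(r=1.0, r_eq=2.0, omega=5.0, a0=0.35, dt=0.01, steps=200)
print(round(h[-1], 3))
```

In an equilibrium model the stress would equal r_eq immediately; the lag equation instead produces the finite-rate response the abstract describes, with a0·ω setting how quickly the history is forgotten.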

  12. Validation of community models: 3. Tracing field lines in heliospheric models

    Science.gov (United States)

    MacNeice, Peter; Elliott, Brian; Acebal, Ariel

    2011-10-01

    Forecasting hazardous gradual solar energetic particle (SEP) bursts at Earth requires accurately modeling field line connections between Earth and the locations of coronal or interplanetary shocks that accelerate the particles. We test the accuracy of field lines reconstructed using four different models of the ambient coronal and inner heliospheric magnetic field, through which these shocks must propagate, including the coupled Wang-Sheeley-Arge (WSA)/ENLIL model. Evaluating the WSA/ENLIL model performance is important since it is the most sophisticated model currently available to space weather forecasters which can model interplanetary coronal mass ejections and, when coupled with particle acceleration and transport models, will provide a complete model for gradual SEP bursts. Previous studies using a simpler Archimedean spiral approach above 2.5 solar radii have reported poor performance. We test the accuracy of the model field lines connecting Earth to the Sun at the onset times of 15 impulsive SEP bursts, comparing the foot points of these field lines with the locations of surface events believed to be responsible for the SEP bursts. We find the WSA/ENLIL model performance is no better than the simplest spiral model, and the principal source of error is the model's inability to reproduce sufficient low-latitude open flux. This may be due to the model's use of static synoptic magnetograms, which fail to account for transient activity in the low corona, during which reconnection events believed to initiate the SEP acceleration may contribute short-lived open flux at low latitudes. Time-dependent coronal models incorporating these transient events may be needed to significantly improve Earth/Sun field line forecasting.
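The "simplest spiral model" used as a baseline above is the Parker (Archimedean) spiral, for which the Earth-to-Sun field-line footpoint can be computed in closed form. The sketch below is that textbook formula, not the WSA/ENLIL tracing procedure:

```python
import math

def parker_footpoint_longitude(r_au, v_sw_kms):
    """Longitude shift (degrees) between a point at heliocentric distance
    r and the solar footpoint of the Parker spiral field line through it:
    delta_phi = Omega * r / v_sw, for a radially expanding solar wind."""
    OMEGA = 2.0 * math.pi / (25.38 * 86400.0)  # sidereal solar rotation, rad/s
    AU = 1.496e11                              # astronomical unit, metres
    delta_phi = OMEGA * (r_au * AU) / (v_sw_kms * 1e3)
    return math.degrees(delta_phi)

# A 400 km/s wind connects 1 AU to a footpoint roughly 60 deg west of
# the observer's heliographic longitude
print(round(parker_footpoint_longitude(1.0, 400.0), 1))
```

Slower wind winds the spiral more tightly and shifts the footpoint further west, which is why footpoint forecasts are sensitive to the modeled solar wind speed.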

  13. Modeling promoter grammars with evolving hidden Markov models

    DEFF Research Database (Denmark)

    Won, Kyoung-Jae; Sandelin, Albin; Marstrand, Troels Torben

    2008-01-01

    MOTIVATION: Describing and modeling biological features of eukaryotic promoters remains an important and challenging problem within computational biology. The promoters of higher eukaryotes in particular display a wide variation in regulatory features, which are difficult to model. Often several factors are involved in the regulation of a set of co-regulated genes. If so, promoters can be modeled with connected regulatory features, where the network of connections is characteristic for a particular mode of regulation. RESULTS: With the goal of automatically deciphering such regulatory structures, we present a method that iteratively evolves an ensemble of regulatory grammars using a hidden Markov Model (HMM) architecture composed of interconnected blocks representing transcription factor binding sites (TFBSs) and background regions of promoter sequences. The ensemble approach reduces the risk…
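The building block of such an architecture is the standard HMM likelihood computation. As a hedged illustration (a toy two-state model, not the paper's evolved block structure), the forward algorithm scores a sequence under an HMM whose states stand for a TFBS block and background:

```python
def forward(obs, pi, A, B):
    """Forward algorithm: P(observation sequence | HMM), with initial
    probabilities pi, transition matrix A, and emission matrix B."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)

# Toy promoter grammar: state 0 = TFBS-like block, state 1 = background
pi = [0.5, 0.5]
A = [[0.9, 0.1],                     # blocks are "sticky": self-transitions
     [0.2, 0.8]]
B = [[0.4, 0.1, 0.1, 0.4],           # TFBS block prefers A/T (symbols 0/3)
     [0.25, 0.25, 0.25, 0.25]]       # background: uniform over A, C, G, T
p = forward([0, 3, 0], pi, A, B)     # likelihood of sequence "ATA"
print(p)
```

In the evolving-ensemble setting, many such block-structured HMMs are scored this way on promoter sets, and the best-scoring grammars seed the next generation.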

  14. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  15. Dynamics models and modeling of tree stand development

    Directory of Open Access Journals (Sweden)

    M. V. Rogozin

    2015-04-01

    Full Text Available A brief analysis of scientific work in Russia and the CIS over the past 100 years is given. Logical and mathematical models are considered conceptually, together with some results of their verification. It was found that the models incorporate different laws and parameters, which allows them to be divided into four categories: models of static states, development models, models of tending natural forest, and models of cultivation. Each category has fulfilled, and continues to fulfil, its tasks in forest management. Thus, static-state models (growth and yield tables) played a prominent role in establishing what the most productive (fully stocked) stands in different regions of the country may be. However, they do not answer the question of which initial states lead to the production of fully stocked stands. Studies of stand growth have used system analysis, yet works studying static states, removed from biological time, dominate. The real dynamics of stand growth has therefore remained almost unexplored. It is no accident that «chrono-forestry», «plantation forestry» and even «non-traditional forestry» have appeared, advancing a number of new concepts of stand development. Quite in keeping with Kuhn (Kuhn, 2009), a crisis began in forestry: alternative theories appeared, and conflicting scientific schools coexist. To develop models of stand development, it is proposed to use the well-known method of repeated observations over 10–20 years, combined with reconstruction of the history of the initial density. This is established by studying the dynamics of its indicators: the trunk, the crown overlap coefficient, the sum of the volumes of all crowns, and the relative length of the crown. From these indicators, the researcher selects natural series of stand development with the same initial density. As a theoretical basis for the models it is possible to postulate the general properties of

  16. PORTER'S FIVE FORCES MODEL, SCOTT MORTON'S FIVE FORCES MODEL, AND BAKOS & TREACY MODEL ANALYZES STRATEGIC INFORMATION SYSTEMS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Indra Gamayanto

    2004-01-01

    Full Text Available Wollongong City Council (WCC) is one of the most progressive and innovative local government organizations in Australia. Wollongong City Council uses information technology to gain competitive advantage and to face a global economy in the future. Porter's Five Forces model is one of the models that can be used at Wollongong City Council, because it is strong in analyzing the relationships between buyers and suppliers (bargaining power of suppliers and bargaining power of buyers). Another model, Scott Morton's Five Forces model, is strong in analyzing social impact factors, so it can also be used to gain competitive advantage in the future and to build good IT/IS strategic planning. The Bakos & Treacy model is broadly similar to Porter's model, but it too can be applied at Wollongong City Council, to improve the organization's capability in transformation, efficiency, and effectiveness.

  17. Modeling dynamic functional connectivity using a Wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    framework provides model selection by quantifying a model's generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting, for each window length, the predictive...... together, whereas short windows are more unstable and influenced by noise, and we find that our heuristic correctly identifies an adequate level of complexity. On single-subject resting-state fMRI data we find that dynamic models generally outperform static models, and using the proposed heuristic points...

  18. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, having different semantics as abstract system descriptions. Usually, in mechatronic systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, and then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, and different interconnection and execution disciplines for event-based and time-based controllers, to meet the demands for more functionality at even lower prices, under opposing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework, developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model, in wrapper files, back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  19. Can the Stephani model be an alternative to FRW accelerating models?

    International Nuclear Information System (INIS)

    Godlowski, Wlodzimierz; Stelmach, Jerzy; Szydlowski, Marek

    2004-01-01

    A class of Stephani cosmological models as a prototype of a non-homogeneous universe is considered. The non-homogeneity can lead to accelerated evolution, which is now observed from the SNe Ia data. Three samples of type Ia supernovae obtained by Perlmutter et al, Tonry et al and Knop et al are taken into account. Different statistical methods (best fits as well as the maximum likelihood method) to obtain estimation for the model parameters are used. The Stephani model is considered as an alternative to the ΛCDM model in the explanation of the present acceleration of the universe. The model explains the acceleration of the universe at the same level of accuracy as the ΛCDM model (the χ² statistics are comparable). From the best fit analysis it follows that the Stephani model is characterized by a higher value of the density parameter Ωm0 than the ΛCDM model. It is also shown that the model is consistent with the location of CMB peaks

  20. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
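A common way to realize such a signal generation model is to synthesize a surrogate signal that preserves a measured error's mean, variance and power spectral density: keep the FFT amplitudes and randomize the phases. The sketch below is a generic illustration, not the SMAP tooling; the toy "estimation error" signal is an assumption for the demo.

```python
import numpy as np

def surrogate(x, rng):
    """Random signal with (approximately) the same mean, variance and power
    spectral density as x: keep FFT amplitudes, randomize phases."""
    X = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.size)
    phases[0] = 0.0            # keep the DC bin real
    phases[-1] = 0.0           # keep the Nyquist bin real (even-length signals)
    Xs = np.abs(X) * np.exp(1j * phases)
    return np.fft.irfft(Xs, n=x.size) + x.mean()

rng = np.random.default_rng(0)
t = np.arange(2048)
x = np.sin(0.05 * t) + 0.3 * rng.normal(size=t.size)   # toy "estimation error"
y = surrogate(x, rng)
print(round(x.var(), 2), round(y.var(), 2))
```

Because only phases are altered, the surrogate's periodogram matches the original's bin for bin, while its time-domain realization is statistically independent of it.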

  1. Optimizing a gap conductance model applicable to VVER-1000 thermal–hydraulic model

    International Nuclear Information System (INIS)

    Rahgoshay, M.; Hashemi-Tilehnoee, M.

    2012-01-01

    Highlights: ► Two known conductance models for application in a VVER-1000 thermal–hydraulic code are examined. ► An optimized gap conductance model is developed which predicts the gap conductance in good agreement with FSAR data. ► The licensed thermal–hydraulic code is externally coupled with the gap conductance model predictor. -- Abstract: The modeling of gap conductance for application in VVER-1000 thermal–hydraulic codes is addressed. Two known models, namely the CALZA-BINI and RELAP5 gap conductance models, are examined. By externally linking the gap conductance models with the COBRA-EN thermal-hydraulic code, the acceptable range of each model is specified. The result of each gap conductance model versus linear heat rate has been compared with FSAR data. A linear heat rate of about 9 kW/m is the boundary for the optimization process. Since each gap conductance model has its advantages and limitations, the optimized gap conductance model can predict the gap conductance better than either of the two models individually.

  2. Towards a standard model for research in agent-based modeling and simulation

    Directory of Open Access Journals (Sweden)

    Nuno Fachada

    2015-11-01

    Full Text Available Agent-based modeling (ABM) is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as an independent decision-making agent. ABMs are very sensitive to implementation details, so it is very easy to inadvertently introduce changes which modify model dynamics. Such problems usually arise due to a lack of transparency in model descriptions, which constrains how models are assessed, implemented and replicated. In this paper, we present PPHPC, a model which aims to serve as a standard in agent-based modeling research, including, but not limited to, conceptual model specification, statistical analysis of simulation output, model comparison and parallelization studies. This paper focuses on the first two aspects (conceptual model specification and statistical analysis of simulation output), also providing a canonical implementation of PPHPC. The paper serves as a complete reference to the presented model, and can be used as a tutorial for simulation practitioners who wish to improve the way they communicate their ABMs.
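PPHPC itself is fully specified in the paper; as a generic illustration of the kind of bottom-up dynamics whose details ABMs are sensitive to, here is a minimal toy grid model (not PPHPC): agents wander a toric grid, feed on cells that regrow after a delay, reproduce above an energy threshold, and die at zero energy. All rules and parameter values are invented for the demo.

```python
import random

def run_abm(n_agents=50, size=20, steps=100, seed=0):
    """Minimal toy ABM: returns the agent population recorded at each step."""
    rng = random.Random(seed)
    regrow = [[0] * size for _ in range(size)]      # countdown until cell is edible
    agents = [[rng.randrange(size), rng.randrange(size), 10] for _ in range(n_agents)]
    pops = []
    for _ in range(steps):
        newborn = []
        for ag in agents:                            # ag = [x, y, energy]
            ag[0] = (ag[0] + rng.choice((-1, 0, 1))) % size
            ag[1] = (ag[1] + rng.choice((-1, 0, 1))) % size
            ag[2] -= 1                               # metabolic cost of a step
            if regrow[ag[0]][ag[1]] == 0:            # cell is edible: eat it
                ag[2] += 4
                regrow[ag[0]][ag[1]] = 10            # cell regrows after 10 steps
            if ag[2] >= 20:                          # split energy with offspring
                ag[2] //= 2
                newborn.append([ag[0], ag[1], ag[2]])
        agents = [a for a in agents + newborn if a[2] > 0]
        for row in regrow:                           # advance regrowth countdowns
            for j in range(size):
                if row[j] > 0:
                    row[j] -= 1
        pops.append(len(agents))
    return pops

pops = run_abm()
print(pops[0], pops[-1])
```

Even in a toy model this small, changing an implementation detail (e.g. the order in which agents act, or when regrowth is advanced) changes the population trajectory, which is exactly the replication problem the paper addresses.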

  3. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
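The sequential hypothesis testing ingredient can be illustrated with Wald's sequential probability ratio test (SPRT), which decides between two candidate values of the probability that a stochastic model satisfies a behavioral specification, using only as many simulation runs as needed. This is a generic textbook sketch, not the authors' algorithm; the Bernoulli draws stand in for pass/fail outcomes of simulated traces.

```python
import math
import random

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 (p1 > p0) on Bernoulli samples.
    Returns ('H0' | 'H1' | 'inconclusive', number of samples consumed)."""
    upper = math.log((1 - beta) / alpha)    # crossing it accepts H1
    lower = math.log(beta / (1 - alpha))    # crossing it accepts H0
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "inconclusive", n

random.seed(1)
draws = (random.random() < 0.9 for _ in range(10_000))  # traces that pass w.p. 0.9
decision, n = sprt(draws, p0=0.5, p1=0.9)
print(decision, n)
```

The appeal for model checking is that the test stops early: when the true satisfaction probability is far from the boundary, a decision typically needs only a handful of simulation runs.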

  4. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
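The LASSO mentioned in the talk can be sketched in a few lines of cyclic coordinate descent with soft-thresholding. This is a textbook illustration, not code from the lecture; the synthetic data and the objective scaling (1/2n squared loss plus an ℓ1 penalty) are assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding.
    Minimizes (1/2n)*||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ w + X[:, j] * w[j]   # partial residual excluding feature j
            rho = X[:, j] @ r_j / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                 # p features, only 3 truly active
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.normal(size=100)
w_hat = lasso_cd(X, y, lam=0.1)
print(np.count_nonzero(np.abs(w_hat) > 1e-8))
```

The soft-threshold step sets small coefficients exactly to zero, which is what makes the estimator produce sparse models in the p >> n regime the talk focuses on.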

  5. STEREOMETRIC MODELLING

    Directory of Open Access Journals (Sweden)

    P. Grimaldi

    2012-07-01

    Full Text Available Stereometric modelling means modelling achieved with: – the use of a pair of virtual cameras with parallel axes, positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); – the visualization of the shot in two distinct windows; – the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is inaccurately applied to the simple perspective of an object, the word stereo must be added, so that "3D stereo vision" stands for "three-dimensional view" and therefore measures the width, height and depth of the surveyed image. This is made possible by the development of a stereometric model, either real or virtual, through the "materialization", either real or virtual, of the optical stereometric model, made visible with a stereoscope. A continuous on-line updating of the cultural heritage record is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available on line at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view

  6. A study on the intrusion model by physical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    In physical modeling, the actual phenomena of seismic wave propagation are measured directly, as in a field survey, and furthermore the structure and physical properties of the subsurface are known. The measured datasets from physical modeling are therefore very desirable as input data for testing the efficiency of various inversion algorithms. An underground structure formed by intrusion, which can often be seen in seismic sections for oil exploration, is investigated by physical modeling. The model is characterized by various types of layer boundaries with steep dip angles. These physical modeling data are thus valuable not only for interpreting seismic sections for oil exploration as a case history, but also for developing data processing techniques and estimating the capability of software such as migration and full waveform inversion. (author). 5 refs., 18 figs.

  7. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  8. A model for photothermal responses of flowering in rice. II. Model evaluation.

    NARCIS (Netherlands)

    Yin, X.; Kropff, M.J.; Nakagawa, H.; Horie, T.; Goudriaan, J.

    1997-01-01

    A detailed nonlinear model, the 3s-Beta model, for photothermal responses of flowering in rice (Oryza sativa L.) was evaluated for predicting rice flowering date in field conditions. This model was compared with three other models: a three-plane linear model and two nonlinear models, viz., the

  9. Modeling North Atlantic Nor'easters With Modern Wave Forecast Models

    Science.gov (United States)

    Perrie, Will; Toulany, Bechara; Roland, Aron; Dutour-Sikiric, Mathieu; Chen, Changsheng; Beardsley, Robert C.; Qi, Jianhua; Hu, Yongcun; Casey, Michael P.; Shen, Hui

    2018-01-01

    Three state-of-the-art operational wave forecast model systems are implemented on fine-resolution grids for the Northwest Atlantic. These models are: (1) a composite model system consisting of SWAN implemented within WAVEWATCHIII® (the latter is hereafter, WW3) on a nested system of traditional structured grids, (2) an unstructured grid finite-volume wave model denoted "SWAVE," using SWAN physics, and (3) an unstructured grid finite element wind wave model denoted as "WWM" (for "wind wave model") which uses WW3 physics. Models are implemented on grid systems that include relatively large domains to capture the wave energy generated by the storms, as well as including fine-resolution nearshore regions of the southern Gulf of Maine with resolution on the scale of 25 m to simulate areas where inundation and coastal damage have occurred, due to the storms. Storm cases include three intense midlatitude cases: a spring Nor'easter storm in May 2005, the Patriot's Day storm in 2007, and the Boxing Day storm in 2010. Although these wave model systems have comparable overall properties in terms of their performance and skill, it is found that there are differences. Models that use more advanced physics, as presented in recent versions of WW3, tuned to regional characteristics, as in the Gulf of Maine and the Northwest Atlantic, can give enhanced results.

  10. Limits with modeling data and modeling data with limits

    Directory of Open Access Journals (Sweden)

    Lionello Pogliani

    2006-01-01

    Full Text Available Modeling of the solubility of amino acids and purine and pyrimidine bases with a set of sixteen molecular descriptors has been thoroughly analyzed to detect and understand the reasons for anomalies in the description of this property for these two classes of compounds. Unsatisfactory modeling can be ascribed to incomplete collateral data, i.e., to the fact that insufficient data are known about the behavior of these compounds in solution. This is usually because intermolecular forces cannot be modeled. The anomalous modeling can be detected from the rather large values of the standard deviation of the estimates for the whole set of compounds, and from the unsatisfactory modeling of some of the subsets of these compounds. Thus the detected abnormalities can be used (i) to get an idea about weak intermolecular interactions such as hydration, self-association, and hydrogen-bond phenomena in solution, and (ii) to reshape the molecular descriptors with the introduction of parameters that allow better modeling. This last procedure should be used with care, bearing in mind that solubility phenomena are rather complex.

  11. Transparent Model Transformation: Turning Your Favourite Model Editor into a Transformation Tool

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2015-01-01

    Current model transformation languages are supported by dedicated editors, often closely coupled to a single execution engine. We introduce Transparent Model Transformation, a paradigm enabling modelers to specify transformations using a familiar tool: their model editor. We also present VMTL, th...... model transformation tool sharing the model editor’s benefits, transparently....

  12. Model Fusion Tool - the Open Environmental Modelling Platform Concept

    Science.gov (United States)

    Kessler, H.; Giles, J. R.

    2010-12-01

    The vision of an Open Environmental Modelling Platform: seamlessly linking geoscience data, concepts and models to aid decision making in times of environmental change. Governments and their executive agencies across the world are facing increasing pressure to make decisions about the management of resources in light of population growth and environmental change. In the UK, for example, groundwater is becoming a scarce resource for large parts of its most densely populated areas. At the same time, river and groundwater flooding resulting from high rainfall events is increasing in scale and frequency, and sea level rise is threatening the defences of coastal cities. There is also a need for affordable housing, improved transport infrastructure and waste disposal, as well as sources of renewable energy and sustainable food production. These challenges can only be resolved if solutions are based on sound scientific evidence. Although we have knowledge and understanding of many individual processes in the natural sciences, it is clear that a single scientific discipline is unable to answer the questions about their inter-relationships. Modern science increasingly employs computer models to simulate the natural, economic and human systems. Management and planning require scenario modelling, forecasts and ‘predictions'. Although the outputs are often impressive in terms of apparent accuracy and visualisation, they are inherently not suited to simulating the response to feedbacks from other models of the earth system, such as the impact of human actions. Geological Survey Organisations (GSOs) are increasingly employing advances in information technology to visualise and improve their understanding of geological systems. Instead of 2-dimensional paper maps and reports, many GSOs now produce 3-dimensional geological framework models and groundwater flow models as their standard output. Additionally, the British Geological Survey has developed standard routines to link geological

  13. Building generic anatomical models using virtual model cutting and iterative registration

    Directory of Open Access Journals (Sweden)

    Hallgrímsson Benedikt

    2010-02-01

    Full Text Available Abstract Background Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. Methods The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. Results After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Conclusions Our method is very flexible and easy to use such that anyone can use image stacks to create models and

  14. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
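The full-conditional specification approach can be sketched as a chained-equations loop: each incomplete variable is regressed on all the others, and its missing entries are redrawn on every sweep. The sketch below is a generic illustration (a single imputation with OLS conditionals and Gaussian noise), not the authors' setup; the synthetic data are an assumption for the demo.

```python
import numpy as np

def fcs_impute(X, n_sweeps=10, rng=None):
    """One chained-equations imputation: each variable with missing values is
    regressed (OLS) on all others, and missing entries are replaced by the
    predicted mean plus Gaussian noise matched to the residual scale."""
    rng = rng if rng is not None else np.random.default_rng()
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                  # start from mean imputation
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            A = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            resid = X[obs, j] - A[obs] @ beta
            sigma = resid.std(ddof=A.shape[1])
            X[miss[:, j], j] = A[miss[:, j]] @ beta + sigma * rng.normal(size=miss[:, j].sum())
    return X

rng = np.random.default_rng(1)
n = 300
z = rng.normal(size=n)                           # shared latent factor
data = np.column_stack([z + 0.1 * rng.normal(size=n),
                        2 * z + 0.1 * rng.normal(size=n),
                        -z + 0.1 * rng.normal(size=n)])
data[rng.random(size=n) < 0.3, 0] = np.nan       # ~30% missing in the first column
imp = fcs_impute(data, rng=rng)
print(np.isnan(imp).any())
```

A proper multiple imputation would repeat this with parameter draws (e.g. bootstrap or posterior draws of beta) to propagate imputation uncertainty; the single-imputation loop above only shows the conditional-specification mechanics.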

  15. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  16. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. In the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
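The core of the first model (transform, then fit a first-order autoregression to the errors) can be sketched in a few lines. This is a generic illustration, not the authors' code: the AR(1) parameters used to simulate the transformed-domain errors are invented for the demo, and a real application would fit the Box-Cox parameter and condition on climate as the abstract describes.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 gives the log transform."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def fit_ar1(e):
    """Least-squares estimates of phi and innovation std for e_t = phi*e_{t-1} + w_t."""
    phi = e[:-1] @ e[1:] / (e[:-1] @ e[:-1])
    resid = e[1:] - phi * e[:-1]
    return phi, resid.std(ddof=1)

rng = np.random.default_rng(42)
# simulate AR(1) forecast errors in the transformed domain (phi = 0.7, sigma = 0.5)
e = np.zeros(500)
for t in range(1, 500):
    e[t] = 0.7 * e[t - 1] + 0.5 * rng.normal()
phi_hat, sigma_hat = fit_ar1(e)
print(round(phi_hat, 2), round(sigma_hat, 2))
```

Given phi and sigma, a one-step-ahead forecast interval in the transformed domain is phi*e_t ± z*sigma, which is then mapped back through the inverse Box-Cox transform; this back-transformation is what makes the intervals asymmetric on the inflow scale.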

  17. Modelling open pit shovel-truck systems using the Machine Repair Model

    Energy Technology Data Exchange (ETDEWEB)

    Krause, A.; Musingwini, C. [CBH Resources Ltd., Sydney, NSW (Australia). Endeaver Mine

    2007-08-15

    Shovel-truck systems for loading and hauling material in open pit mines are now routinely analysed using simulation models or off-the-shelf simulation software packages, which can be very expensive for once-off or occasional use. The simulation models invariably produce different estimations of fleet sizes due to their differing estimations of cycle time. No single model or package can accurately estimate the required fleet size because the fleet operating parameters are characteristically random and dynamic. In order to improve confidence in sizing the fleet for a mining project, at least two estimation models should be used. This paper demonstrates that the Machine Repair Model can be modified and used as a model for estimating truck fleet size in an open pit shovel-truck system. The modified Machine Repair Model is first applied to a virtual open pit mine case study. The results compare favourably to output from other estimation models using the same input parameters for the virtual mine. The modified Machine Repair Model is further applied to an existing open pit coal operation, the Kwagga Section of Optimum Colliery as a case study. Again the results confirm those obtained from the virtual mine case study. It is concluded that the Machine Repair Model can be an affordable model compared to off-the-shelf generic software because it is easily modelled in Microsoft Excel, a software platform that most mines already use.
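
    The Machine Repair Model referred to above is the classical finite-source queue: N trucks cycle between hauling and being loaded by one of c shovels. A minimal sketch of its steady-state solution follows; it is an illustration of the queueing formulas, not the authors' spreadsheet implementation, and all rates are hypothetical.

```python
from math import factorial

def machine_repair(N, c, lam, mu):
    """Finite-source M/M/c ("machine repair") queue: N trucks return to the
    shovels at rate lam each and are loaded by one of c shovels at rate mu.
    Returns the steady-state distribution of trucks at the shovels, the mean
    number queued or in service, and the expected number out hauling."""
    rho = lam / mu
    w = []
    for n in range(N + 1):
        if n <= c:
            wn = factorial(N) / (factorial(N - n) * factorial(n)) * rho**n
        else:
            wn = factorial(N) / (factorial(N - n) * factorial(c) * c**(n - c)) * rho**n
        w.append(wn)
    p0 = 1.0 / sum(w)
    p = [p0 * wn for wn in w]
    mean_at_shovel = sum(n * pn for n, pn in enumerate(p))
    hauling = N - mean_at_shovel
    return p, mean_at_shovel, hauling
```

    Varying N until the shovel utilisation or truck waiting time meets a target gives a fleet-size estimate comparable to the paper's approach.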

  18. CFD Wake Modelling with a BEM Wind Turbine Sub-Model

    Directory of Open Access Journals (Sweden)

    Anders Hallanger

    2013-01-01

    Modelling of wind farms using computational fluid dynamics (CFD), resolving the flow field around each wind turbine's blades on a moving computational grid, is still too costly and time consuming in terms of computational capacity and effort. One strategy is to use sub-models for the wind turbines, and sub-grid models for turbulence production and dissipation, to model the turbulent viscosity accurately enough to handle the interaction of wakes in wind farms. A wind turbine sub-model based on Blade Element Momentum (BEM) theory, see Hansen (2008), has been implemented in an in-house CFD code, see Hallanger et al. (2002). The tangential and normal reaction forces from the wind turbine blades are distributed on the control volumes (CVs) at the wind turbine rotor location as sources in the conservation equations of momentum. The classical k-epsilon turbulence model of Launder and Spalding (1972) is implemented with a sub-grid turbulence (SGT) model, see Sha and Launder (1979) and Sand and Salvesen (1994). Steady-state CFD simulations were compared with flow and turbulence measurements in the wake of a model-scale wind turbine, see Krogstad and Eriksen (2011). The simulated results compared best with experiments when stalling (boundary layer separation) on the wind turbine blades did not occur. The SGT model improved the turbulence level in the wake but seems to smear the wake flow structure. It should be noted that the simulations were carried out at steady state and so do not include the flow oscillations caused by vortex shedding from the tower and blades that were present in the experiments. Further improvement of the simulated velocity defect and turbulence level seems to rely on better parameter estimation for the SGT model, improvements to the SGT model, and possibly transient rather than steady-state simulations.
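
    Distributing rotor reaction forces as momentum sources can be illustrated with the simplest actuator-disk version of the momentum theory underlying BEM. The sketch below replaces the full blade-element calculation with a prescribed thrust coefficient and an even force distribution over the rotor CVs; it illustrates the idea only and is not the in-house code described above.

```python
import numpy as np

def axial_induction(ct):
    # Momentum-theory induction factor from the thrust coefficient, CT = 4a(1-a)
    return 0.5 * (1.0 - np.sqrt(1.0 - ct))

def disk_momentum_source(u_inf, rho, ct, area, n_cells):
    """Total axial thrust of an actuator disk, T = 0.5*rho*U^2*CT*A, split
    evenly over the n_cells control volumes covering the rotor and returned
    as a momentum sink per CV (a stand-in for per-blade BEM forces)."""
    thrust = 0.5 * rho * u_inf**2 * ct * area
    return -thrust / n_cells
```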

  19. Aggregated wind power plant models consisting of IEC wind turbine models

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela

    2015-01-01

    The common practice regarding the modelling of large generation components has been to make use of models representing the performance of the individual components with a required level of accuracy and detail. Owing to the rapid increase of wind power plants comprising large numbers of wind turbines, the parameters and models required to represent each individual wind turbine in detail make it necessary to develop aggregated wind power plant models in view of the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable speed wind turbine models (type 3 and type 4) with a power plant controller are presented. The performance of the detailed benchmark wind power plant model and the aggregated model are compared by means of simulations for the specified test cases. Finally, the results are summarized and discussed.

  20. Modeling Historical Land Cover and Land Use: A Review from Contemporary Modeling

    Directory of Open Access Journals (Sweden)

    Laura Alfonsina Chang-Martínez

    2015-09-01

    Spatially-explicit land cover land use change (LCLUC) models are becoming increasingly useful tools for historians and archaeologists. Such models have been developed and used by geographers, ecologists and land managers over the last few decades to carry out prospective scenarios. In this paper, we review historical models and compare them with prospective models, on the assumption that the ample experience gained in developing prospective simulation models can benefit the development of models whose objective is the simulation of changes that happened in the past. The review is divided into three sections: in the first section, we explain the functioning of contemporary LCLUC models; in the second section, we analyze historical LCLUC models; in the third section, we compare the two types of models and discuss the contributions of contemporary LCLUC models to historical LCLUC models.

  1. Individual model evaluation and probabilistic weighting of models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-01-01

    This note stresses the importance of trying to assess the accuracy of each model individually. Putting a Bayesian probability distribution on a population of models faces conceptual and practical complications, and apparently can come only after the work of evaluating the individual models. Moreover, the primary issue is "How good is this model?" Therefore, the individual evaluations are first in both chronology and importance. They are not easy, but some ideas are given here on how to perform them.

  2. Beginning SQL Server Modeling Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL M

  3. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems that can provide a multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is an object model and the storage model is a binary stream. Multi-person parallel modeling requires multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
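
    The persistent-storage round trip described in this record (in-memory object model to binary stream and back) can be sketched with Python's standard serialization. The ModelNode class is a hypothetical stand-in for the paper's MDDT model elements, not its actual data structures.

```python
import pickle

class ModelNode:
    # Minimal stand-in for an element of the in-memory object model
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def to_storage(root):
    # Object model -> binary stream (the "storage model")
    return pickle.dumps(root)

def from_storage(blob):
    # Binary stream -> object model
    return pickle.loads(blob)
```

    In a multi-person setting, such blobs would be what the synchronization layer exchanges and merges.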

  4. Business Model Visualization

    OpenAIRE

    Zagorsek, Branislav

    2013-01-01

    A business model describes the company's most important activities, its proposed value, and the compensation for that value. Business model visualization makes it possible to capture and describe the most important components of the business model simply and systematically, while standardization of the concept allows comparison between companies. There are several ways to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...

  5. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before the mean error values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe Reff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions are less reliable than Model 3's. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
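
    The Nash-Sutcliffe efficiency quoted above (Reff rising from 0.77 to 0.87 after correction) is a standard goodness-of-fit measure and easy to reproduce; a minimal implementation:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    # Reff = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)
```

    Reff equals 1 for a perfect simulation and 0 for a simulation no better than the observed mean.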

  6. On coupling global biome models with climate models

    International Nuclear Information System (INIS)

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992), which predicts global vegetation patterns in equilibrium with climate, is coupled with the ECHAM climate model of the Max-Planck-Institut fuer Meteorologie, Hamburg. It is found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only between the initial biome distribution and the biome distribution computed after the first simulation period, provided that the climate-biome model is started from a biome distribution that resembles the present-day distribution. After the first simulation period, there is no significant shrinking, expanding, or shifting of biomes. Likewise, no trend is seen in global averages of land-surface parameters and climate variables. (orig.)

  7. Models in Science Education: Applications of Models in Learning and Teaching Science

    Science.gov (United States)

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  8. Weighted-indexed semi-Markov models for modeling financial returns

    International Nuclear Information System (INIS)

    D’Amico, Guglielmo; Petroni, Filippo

    2012-01-01

    In this paper we propose a new stochastic model based on a generalization of semi-Markov chains for studying the high frequency price dynamics of traded stocks. We assume that the financial returns are described by a weighted-indexed semi-Markov chain model. We show, through Monte Carlo simulations, that the model is able to reproduce important stylized facts of financial time series such as the first-passage-time distributions and the persistence of volatility. The model is applied to data from the Italian and German stock markets from 1 January 2007 until the end of December 2010. (paper)
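
    The semi-Markov idea behind the model above (state-dependent sojourn times on top of an embedded Markov chain) can be illustrated with a toy simulator. The index weighting that gives the model its name is omitted, and the geometric sojourn distribution is a simplification chosen for brevity.

```python
import numpy as np

def simulate_semi_markov(P, sojourn_means, n_jumps, seed=0):
    """Toy semi-Markov chain: transitions follow the embedded Markov matrix P,
    and the chain stays in each state for a geometric sojourn time with the
    given state-dependent mean (a simplification of the paper's model)."""
    rng = np.random.default_rng(seed)
    state = 0
    path = []
    for _ in range(n_jumps):
        stay = rng.geometric(1.0 / sojourn_means[state])
        path.extend([state] * stay)
        state = rng.choice(len(P), p=P[state])
    return np.array(path)
```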

  9. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  10. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter-observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  11. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the geometry model must be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but most existing programs have shortcomings: some are not accurate, or are tied to a specific CAD format. To convert complex CAD models into GDML accurately, a CAD-based modeling method for Geant4 was developed that automatically converts a CAD geometry model into a GDML geometry model. The essence of this method is translating between a CAD model represented with boundary representation (B-REP) and a GDML model represented with constructive solid geometry (CSG). First, the CAD model is decomposed into simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  12. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a); model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed, based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating that they are appropriate measures for evaluating model realizations. The use of validation

  13. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified, without introducing an additional constraint. I label this statistical model identification and show how it comes about in mixed models, and why the choice of which effects are treated as fixed and which as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
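
    The linear dependency that breaks identification is easy to exhibit numerically: since cohort = period - age, the design matrix of the linear APC specification is rank-deficient. A small demonstration (the age and period values are arbitrary):

```python
import numpy as np

# Linear APC specification on a small age-by-period grid.
age = np.repeat(np.arange(20, 25), 5)        # 5 ages, each observed in 5 periods
period = np.tile(np.arange(2000, 2005), 5)   # 5 periods
cohort = period - age                        # birth cohort is determined exactly

# Design matrix: intercept, age, period, cohort.
X = np.column_stack([np.ones_like(age), age, period, cohort])
rank = np.linalg.matrix_rank(X)              # rank 3 < 4 columns: not identified
```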

  14. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. washing out salts by additional irrigation. Dynamic simulation models are helpful tools for calculating the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals uncertainty intervals for parameter uncertainty. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty; only one or two out of five to six parameters in each model set display a high uncertainty (e.g. pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter.
The model sets show a high variation in uncertainty intervals for deep percolation as well, with an interquartile range (IQR) of
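
    The GLUE procedure used in the study can be sketched in its simplest form: score each parameter set with a likelihood measure, keep the sets above a behavioural threshold, and weight the survivors by likelihood. The Nash-Sutcliffe likelihood and the threshold value below are assumptions for illustration, not the study's choices.

```python
import numpy as np

def glue(param_sets, sim_outputs, obs, threshold=0.5):
    """Minimal GLUE sketch: Nash-Sutcliffe likelihood per parameter set,
    behavioural cut at `threshold`, and normalised likelihood weights."""
    obs = np.asarray(obs, float)
    denom = np.sum((obs - obs.mean())**2)
    L = np.array([1.0 - np.sum((obs - s)**2) / denom for s in sim_outputs])
    keep = L > threshold
    w = L[keep] / L[keep].sum()
    return param_sets[keep], w
```

    Weighted quantiles of the behavioural simulations then give the uncertainty intervals reported in such analyses.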

  15. Sunspot Modeling: From Simplified Models to Radiative MHD Simulations

    Directory of Open Access Journals (Sweden)

    Rolf Schlichenmaier

    2011-09-01

    We review our current understanding of sunspots from the scales of their fine structure to their large-scale (global) structure, including the processes of their formation and decay. Recently, sunspot models have undergone a dramatic change. In the past, several aspects of sunspot structure were addressed by static MHD models with parametrized energy transport. Models of sunspot fine structure have relied heavily on strong assumptions about flow and field geometry (e.g., flux tubes, "gaps", convective rolls), which were motivated in part by the observed filamentary structure of penumbrae or by the necessity of explaining the substantial energy transport required to maintain the penumbral brightness. However, none of these models could self-consistently explain all aspects of penumbral structure (energy transport, filamentation, Evershed flow). In recent years, 3D radiative MHD simulations have advanced dramatically to the point at which models of complete sunspots with sufficient resolution to capture sunspot fine structure are feasible. Here overturning convection is the central element responsible for energy transport, the filamentation leading to fine structure, and the driving of strong outflows. On the larger scale these models are also in the process of addressing the subsurface structure of sunspots as well as sunspot formation. With this shift in modeling capabilities and the recent advances in high resolution observations, future research will be guided by comparing observation and theory.

  16. Model documentation renewable fuels module of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1997 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. This documentation report serves three purposes. First, it is a reference document for model analysts, model users, and the public interested in the construction and application of the RFM. Second, it meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Finally, such documentation facilitates continuity in EIA model development by providing information sufficient to perform model enhancements and data updates as part of EIA's ongoing mission to provide analytical and forecasting information systems.

  17. Model documentation renewable fuels module of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-04-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1997 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. This documentation report serves three purposes. First, it is a reference document for model analysts, model users, and the public interested in the construction and application of the RFM. Second, it meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Finally, such documentation facilitates continuity in EIA model development by providing information sufficient to perform model enhancements and data updates as part of EIA's ongoing mission to provide analytical and forecasting information systems.

  18. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  19. OPEC model : adjustment or new model

    International Nuclear Information System (INIS)

    Ayoub, A.

    1994-01-01

    Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, privatization. This article provides answers to two major questions: what are the reasons for these changes, and do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? The latter would imply a profound change in the political and institutional systems of oil-producing countries. (Author)

  20. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Its philosophy is to solve a problem one step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can then be simulated with the real-world sensor data, and the output from the simulated digital control system compared to that of the old analog-based control system. Model based design can be compared to Agile software development: the Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units, while progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.

  1. Metamodeling for Business Model Design : Facilitating development and communication of Business Model Canvas (BMC) models with an OMG standards-based metamodel.

    OpenAIRE

    Hauksson, Hilmar

    2013-01-01

    Interest in business models and business modeling has increased rapidly since the mid-1990s, and there are numerous approaches used to create business models. The business model concept has many definitions, which can lead to confusion and slower progress in the research and development of business models. A business model ontology (BMO) was created in 2004, in which the business model concept was conceptualized based on an analysis of existing literature. A few years later the Business Model Can...

  2. Thermal Models of the Niger Delta: Implications for Charge Modelling

    International Nuclear Information System (INIS)

    Ejedawe, J.

    2002-01-01

    There are generally three main sources of temperature data: BHT data from log headers, production temperature data, and continuous temperature logs. From analysis of continuous temperature profiles of over 100 wells in the Niger Delta, two main thermal models (single leg and dogleg) are defined, with occasional occurrence of a modified dogleg model. The dogleg model is characterised by a shallow interval of low geothermal gradient (<3.0 °C/100m) and is characteristically developed onshore. While thermal modelling offshore is simple, requiring only consideration of heat transients, modelling onshore requires modelling programmes with built-in modules to handle convective heat flow dissipation in the shallow layer. Current work-around methods involve tweaking thermal conductivity values to mimic the effects of the underlying heat flow process, or heat flow mapping above and below the depth of gradient change. These methods allow for more realistic thermal modelling, hydrocarbon type prediction, and more accurate prediction of temperature prior to drilling and of reservoir rock properties. The regional distribution of the models also impacts the regional hydrocarbon distribution pattern in the Niger Delta

  3. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models
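    The LDC-versus-chronological comparison can be illustrated with a toy merit-order dispatch. The unit capacities, prices, and hourly load/wind data below are invented for illustration:

```python
# Toy comparison of chronological vs load-duration-curve (LDC) production
# costing with wind. A cheap base unit is dispatched first, then an
# expensive peak unit. All capacities, prices and hourly data are invented.

def hourly_cost(net_load, base_cap=50.0, base_price=10.0, peak_price=30.0):
    base = min(net_load, base_cap)          # cheap base-load energy
    peak = max(net_load - base_cap, 0.0)    # expensive peaking energy
    return base * base_price + peak * peak_price

load = [60.0, 70.0, 80.0, 55.0, 40.0, 65.0]   # hourly system load
wind = [5.0, 0.0, 10.0, 20.0, 15.0, 5.0]      # hourly wind generation
net = [l - w for l, w in zip(load, wind)]     # net load served by fossil units

chrono_cost = sum(hourly_cost(h) for h in net)                     # hour by hour
ldc_cost = sum(hourly_cost(h) for h in sorted(net, reverse=True))  # duration curve
print(chrono_cost, ldc_cost)  # → 4250.0 4250.0
```

    Without ramping or unit-commitment constraints the two totals are identical, because the cost of an hour depends only on its load level; the differences the paper reports arise from the chronological commitment of units, which an LDC model cannot represent.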

  4. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    Full Text Available State-of-the-art challenges in sustainable management of water resources have created demand for integrated, flexible and easy to use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS, GPSRU (Fort Collins, CO, USA), USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows the implementation of single process components to be compiled and applied as custom tailored model assemblies. This paper describes basic principles of the OMS and its main components and explains in more detail how the problems during coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example, the implementation of the hydrological model J2000 is discussed.
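    The idea of single process components compiled into custom tailored model assemblies can be sketched as follows; the component names and process equations are invented and do not reflect the OMS API or the J2000 model:

```python
# Sketch of a component-based modelling framework: independent process
# components are chained by the framework into a model assembly. The
# Snowmelt/Runoff components and their equations are illustrative only.

class Snowmelt:
    def run(self, state):
        state["melt"] = 0.5 * state.get("snow", 0.0)   # toy degree-day melt
        state["snow"] = state.get("snow", 0.0) - state["melt"]
        return state

class Runoff:
    def run(self, state):
        state["runoff"] = state.get("rain", 0.0) + state.get("melt", 0.0)
        return state

def assemble(*components):
    """Compile single process components into one model assembly."""
    def model(state):
        for c in components:
            state = c.run(state)
        return state
    return model

j2000_like = assemble(Snowmelt(), Runoff())
print(j2000_like({"snow": 10.0, "rain": 2.0}))
```

    Swapping, adding, or removing components changes the assembly without touching the other components, which is the flexibility the OMS aims for.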

  5. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations, for vertical speed w and nonhydrostatic part pressure p'. With properly formulated governing equations, at each time step, the dynamic part of the model is first integrated as in the original hydrostatic model and then nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model; extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part of the dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and in simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.

  6. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.
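    Bayesian model selection of forward models amounts to comparing approximations of the log model evidence for each candidate fitted to the same data. The sketch below uses BIC as a crude stand-in for the free-energy approximation used in the PEB framework; the model names echo the abstract but all fit values are hypothetical:

```python
import math

# Sketch of Bayesian model selection over candidate forward models: the
# model with the highest (approximate) evidence wins. BIC stands in for the
# free-energy approximation of the PEB framework; all numbers are invented.

def bic(log_likelihood, n_params, n_obs):
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# (max log-likelihood, number of parameters) for each hypothetical fit
fits = {
    "BEM (3-layer)": (-1200.0, 40),
    "FDM": (-1195.0, 40),
    "FDM + CSF": (-1150.0, 45),
}
n_obs = 500
scores = {name: bic(ll, k, n_obs) for name, (ll, k) in fits.items()}
best = min(scores, key=scores.get)  # lower BIC ≈ higher evidence
print(best)  # → FDM + CSF
```

    The extra parameters of the richer model are only justified if the improvement in fit outweighs the complexity penalty, which is the trade-off the evidence (and its approximations) formalises.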

  7. VENTILATION MODEL

    International Nuclear Information System (INIS)

    V. Chipman

    2002-01-01

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their postclosure analyses
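    The heat-removal bookkeeping described above can be sketched with a simple air-side energy balance; the formulation and all numbers are illustrative assumptions, not the Ventilation Model's actual equations:

```python
# Air-side energy balance sketch of the wall heat fraction bookkeeping:
# a fraction of decay heat is carried away by ventilation air, and one
# minus that fraction is conducted into the surrounding rock mass.
# The formulation and all numbers are illustrative assumptions.

def ventilation_heat_removal(m_dot, cp, t_out, t_in, q_decay):
    """Fraction of decay heat removed: m_dot*cp*(t_out - t_in) / q_decay."""
    return m_dot * cp * (t_out - t_in) / q_decay

removed = ventilation_heat_removal(
    m_dot=15.0,      # air mass flow rate [kg/s]
    cp=1005.0,       # specific heat of air [J/(kg*K)]
    t_out=35.0,      # outlet air temperature [C]
    t_in=25.0,       # inlet air temperature [C]
    q_decay=3.0e5)   # radionuclide decay heat in the drift segment [W]
wall_heat_fraction = 1.0 - removed   # conducted to the surrounding rock mass
print(round(removed, 4), round(wall_heat_fraction, 4))  # → 0.5025 0.4975
```

    In the actual model both fractions vary in time and along the drift; a downstream model such as the Multiscale Thermohydrologic Model would consume the wall heat fractions as its thermal boundary input.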

  8. Tracer disposition kinetics in the determination of local cerebral blood flow by a venous equilibrium model, tube model, and distributed model

    International Nuclear Information System (INIS)

    Sawada, Y.; Sugiyama, Y.; Iga, T.; Hanano, M.

    1987-01-01

    Tracer distribution kinetics in the determination of local cerebral blood flow (LCBF) were examined using three models: the venous equilibrium, tube, and distributed models. The technique most commonly used for measuring LCBF is the tissue uptake method, first developed and applied by Kety. LCBF measured with the 14C-iodoantipyrine (IAP) method is calculated using an equation derived by Kety, based on Fick's principle, a two-compartment model of blood-tissue exchange, and the tissue concentration at a single data point. This procedure, in which the tissue is assumed to be in equilibrium with venous blood, will be referred to as the tissue equilibration model. In this article, the effects of the concentration gradient of tracer along the length of the capillary (tube model) and of transverse heterogeneity in the capillary transit time (distributed model) on the determination of LCBF were theoretically analyzed for the tissue sampling method. Similarities and differences among these models are explored. The rank order of the LCBF calculated from arterial blood concentration time courses and the tissue concentration of tracer was: tube model (model II) < distributed model (model III) < venous equilibrium model (model I). Data on 14C-IAP kinetics reported by Ohno et al. were employed. The LCBFs calculated based on model I were 45-260% larger than those based on models II or III. To discriminate among the three models, we propose examining the effect of altering the venous infusion time of tracer on the apparent tissue-to-blood concentration ratio (lambda app). The ratio of the predicted lambda app in models II or III to that in model I ranged from 0.6 to 1.3
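    The tissue equilibration (Kety) model at the heart of the method can be summarised in its standard textbook form (this is the general Kety formulation, not reproduced from the paper):

```latex
\frac{dC_t(t)}{dt} = F\left[C_a(t) - \frac{C_t(t)}{\lambda}\right]
\qquad\Longrightarrow\qquad
C_t(T) = F\int_0^{T} C_a(t)\, e^{-(F/\lambda)(T-t)}\,dt
```

    Here $C_t$ and $C_a$ are the tissue and arterial tracer concentrations, $F$ is LCBF per unit tissue volume, and $\lambda$ is the tissue-to-blood partition coefficient; given the measured arterial time course and a single tissue concentration $C_t(T)$, the equation is solved for $F$. The tube and distributed models relax the instantaneous-equilibration assumption implicit in this compartmental form.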

  9. Functionalized anatomical models for EM-neuron Interaction modeling

    Science.gov (United States)

    Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang

    2016-06-01

    The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation, and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help resolve a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.

  10. Hidden Markov models: the best models for forager movements?

    Science.gov (United States)

    Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan

    2013-01-01

    One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative, state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which state the inference of behavioural modes as a classification issue and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that, with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for this purpose. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.
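    For readers unfamiliar with HMM inference, the following is a minimal Viterbi decoder for a three-mode HMM over discretised speed classes, in the spirit of inferring fishing/searching/cruising from hourly VMS records. All states, observation classes and probabilities are invented for illustration:

```python
import math

# Minimal Viterbi decoder: recover the most likely behavioural-mode
# sequence from observed speed classes. Probabilities are illustrative.

states = ["fishing", "searching", "cruising"]
start = {"fishing": 0.3, "searching": 0.4, "cruising": 0.3}
trans = {
    "fishing":   {"fishing": 0.7, "searching": 0.2, "cruising": 0.1},
    "searching": {"fishing": 0.3, "searching": 0.5, "cruising": 0.2},
    "cruising":  {"fishing": 0.1, "searching": 0.2, "cruising": 0.7},
}
emit = {
    "fishing":   {"slow": 0.7, "medium": 0.2, "fast": 0.1},
    "searching": {"slow": 0.2, "medium": 0.6, "fast": 0.2},
    "cruising":  {"slow": 0.1, "medium": 0.2, "fast": 0.7},
}

def viterbi(observations):
    # log-space dynamic programming table and backpointers
    v = [{s: math.log(start[s]) + math.log(emit[s][observations[0]]) for s in states}]
    back = []
    for o in observations[1:]:
        row, ptr = {}, {}
        for s in states:
            prev, score = max(
                ((p, v[-1][p] + math.log(trans[p][s])) for p in states),
                key=lambda t: t[1])
            row[s] = score + math.log(emit[s][o])
            ptr[s] = prev
        v.append(row)
        back.append(ptr)
    path = [max(v[-1], key=v[-1].get)]
    for ptr in reversed(back):       # trace backpointers to the start
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["fast", "fast", "slow", "slow", "medium"]))
```

    An HSMM differs by replacing the implicit geometric dwell-time of each state with an explicit duration distribution, which is what makes it better suited to behavioural modes that persist for many records.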

  11. Hidden Markov models: the best models for forager movements?

    Directory of Open Access Journals (Sweden)

    Rocio Joo

    Full Text Available One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative, state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which state the inference of behavioural modes as a classification issue and may take better advantage of multivariate and nonlinear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen's behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that, with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for this purpose. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  12. How can model comparison help improving species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie on the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  13. Modeling Renewable Penetration Using a Network Economic Model

    Science.gov (United States)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging prices and quantity information between the nodes of the system. This formulation is very flexible and takes very little time to build and modify models. This paper reports an experiment designing a system with photovoltaic and base and peak fossil generators. The level of PV penetration as a function of its price and the capacities of the fossil generators were determined using the network approach and using an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
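    The price/quantity exchange between nodes can be sketched as a simple iterative market-clearing loop; the demand and supply curves, step size, and tolerance below are assumptions, not the model's actual formulation:

```python
# Sketch of price/quantity message passing between network nodes: a market
# node raises its price when demand exceeds supply and lowers it otherwise,
# iterating until the network clears. Curves and step size are assumptions.

def demand(price):
    """Demand node: quantity requested at a given price."""
    return max(100.0 - 2.0 * price, 0.0)

def supply(price):
    """Generator node: quantity offered at a given price."""
    return 3.0 * price

def clear_market(price=1.0, step=0.01, tol=1e-6, max_iter=100000):
    for _ in range(max_iter):
        gap = demand(price) - supply(price)   # quantity imbalance at this price
        if abs(gap) < tol:
            break
        price += step * gap                   # adjust price toward equilibrium
    return price

p = clear_market()
print(round(p, 3))  # → 20.0 (analytic equilibrium: 100 - 2p = 3p, so p = 20)
```

    The iterative solution matching the analytic one mirrors the paper's finding that the network approach agrees closely with the exact, analytic design.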

  14. Finite element modeling of a 3D coupled foot-boot model.

    Science.gov (United States)

    Qiu, Tian-Xia; Teo, Ee-Chon; Yan, Ya-Bo; Lei, Wei

    2011-12-01

    Increasingly, musculoskeletal models of the human body are used as powerful tools to study biological structures. The lower limb, and in particular the foot, is of interest because it is the primary physical interface between the body and the environment during locomotion. The goal of this paper is to adopt finite element (FE) modeling and analysis approaches to create a state-of-the-art 3D coupled foot-boot model for future studies on biomechanical investigation of stress injury mechanisms, footwear design and parachute landing fall simulation. In the modeling process, the foot-ankle model with lower leg was developed from Computed Tomography (CT) images using ScanIP, Surfacer and ANSYS. The boot was then represented by assembling the FE models of the upper, insole, midsole and outsole, built based on the FE model of the foot-ankle, and finally the coupled foot-boot model was generated by putting together the models of the lower limb and boot. In this study, the FE model of the foot and ankle was validated during balanced standing; there was good agreement with the overall patterns of predicted and measured plantar pressure distribution published in the literature. The coupled foot-boot model will be fully validated in subsequent work under both static and dynamic loading conditions for further studies on injury investigation in military and sports settings, footwear design and the characteristics of parachute landing impact in the military. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  15. Evaluation of gas radiation models in CFD modeling of oxy-combustion

    International Nuclear Information System (INIS)

    Rajhi, M.A.; Ben-Mansour, R.; Habib, M.A.; Nemitallah, M.A.; Andersson, K.

    2014-01-01

    Highlights: • CFD modeling of a typical industrial water tube boiler is conducted. • Different combustion processes are considered, including air- and oxy-fuel combustion. • SGG, EWBM, Leckner, Perry and WSGG radiation models are considered in the study. • The EWBM is the most accurate model and is considered to be the benchmark model. • Characteristics of oxy-fuel combustion are compared to those of air-fuel combustion. - Abstract: Proper determination of the radiation energy is very important for proper prediction of the combustion characteristics inside combustion devices using CFD modeling. For this purpose, different gas radiation models were developed and applied in the present work. These radiation models vary in their accuracy and complexity according to the application. In this work, a CFD model for a typical industrial water tube boiler was developed, considering three different combustion environments: air-fuel combustion (21% O2 and 79% N2), oxy-fuel combustion (21% O2 and 79% CO2) and oxy-fuel combustion (27% O2 and 73% CO2). Simple grey gas (SGG), exponential wide band model (EWBM), Leckner, Perry and weighted sum of grey gases (WSGG) radiation models were examined and their influences on the combustion characteristics were evaluated. Among these radiation models, the EWBM was found to provide results close to the experimental data for the present boiler combustion application. The oxy-fuel combustion characteristics were analyzed and compared with those of air-fuel combustion
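    As a concrete example of the simplest family above, the weighted-sum-of-grey-gases model expresses total emissivity as a sum of grey-gas contributions. The weights and absorption coefficients below are illustrative placeholders, not a fitted WSGG set for H2O/CO2 mixtures:

```python
import math

# Weighted-sum-of-grey-gases (WSGG) total emissivity sketch:
#     eps(pL) = sum_i a_i * (1 - exp(-k_i * pL))
# where pL is the partial-pressure path length. The weights a_i and
# absorption coefficients k_i below are illustrative placeholders.

def wsgg_emissivity(pL, weights, kappas):
    return sum(a * (1.0 - math.exp(-k * pL)) for a, k in zip(weights, kappas))

weights = [0.4, 0.3, 0.2]    # grey-gas weights (remainder is the clear gas)
kappas = [0.5, 5.0, 50.0]    # absorption coefficients [1/(atm*m)]

eps = wsgg_emissivity(pL=0.6, weights=weights, kappas=kappas)   # pL in atm*m
print(round(eps, 3))  # → 0.589
```

    More accurate models such as the EWBM resolve the spectral bands explicitly instead of lumping them into a few grey gases, which is why they cost more but track measurements better.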

  16. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.

    Science.gov (United States)

    Jenness, Samuel M; Goodreau, Steven M; Morris, Martina

    2018-04-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers.
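    EpiModel itself is an R package; the Python toy below only illustrates the kind of epidemic-on-network process it implements with far more statistical rigour (temporal ERGMs, formal network estimation). The ring-lattice contact network and the transmission/recovery rates are assumptions:

```python
import random

# Toy stochastic SIR epidemic on a fixed contact network, illustrating the
# process EpiModel simulates. Ring-lattice contacts and rates are invented.

random.seed(1)
n, beta, gamma = 50, 0.3, 0.1                # nodes, transmission, recovery
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring contacts
state = {i: "S" for i in range(n)}
state[0] = "I"                               # one initial infection

for t in range(200):
    nxt = dict(state)
    for i, s in state.items():
        if s == "I":
            if random.random() < gamma:      # recovery
                nxt[i] = "R"
            for j in neighbors[i]:           # transmission along contacts only
                if state[j] == "S" and random.random() < beta:
                    nxt[j] = "I"
    state = nxt

print(sum(1 for s in state.values() if s != "S"), "of", n, "ever infected")
```

    The essential point, shared with EpiModel, is that infection can only travel along network edges, so the contact structure (and not just the rates) shapes the epidemic.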

  17. Microstructural modelling of nuclear graphite using multi-phase models

    International Nuclear Information System (INIS)

    Berre, C.; Fok, S.L.; Marsden, B.J.; Mummery, P.M.; Marrow, T.J.; Neighbour, G.B.

    2008-01-01

    This paper presents a new modelling technique using three-dimensional multi-phase finite element models in which meshes representing the microstructure of thermally oxidised nuclear graphite were generated from X-ray micro-tomography images. The density of the material was related to the image greyscale using Beer-Lambert's law, and multiple phases could thus be defined. The local elastic and non-linear properties of each phase were defined as a function of density and changes in Young's modulus, tensile and compressive strength with thermal oxidation were calculated. Numerical predictions compared well with experimental data and with other numerical results obtained using two-phase models. These models were found to be more representative of the actual microstructure of the scanned material than two-phase models and, possibly because of pore closure occurring during compression, compressive tests were also predicted to be less sensitive to the microstructure geometry than tensile tests
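    The greyscale-to-density step can be sketched as follows. Treating greyscale as transmitted intensity, and the calibration constants and phase thresholds, are all illustrative assumptions, not the paper's calibration:

```python
import math

# Sketch of relating image greyscale to local density via Beer-Lambert
# attenuation, then binning voxels into phases for a multi-phase FE mesh.
# Calibration constants, thresholds and phase names are illustrative.

def density_from_greyscale(g, g_ref=255.0, rho_ref=1.8, mu=1.0):
    """Beer-Lambert: I = I0*exp(-mu*rho*t), so rho ∝ ln(I0/I); here the
    greyscale g stands in for transmitted intensity (an assumption)."""
    return rho_ref * math.log(g_ref / g) / mu

def phase_of(rho, thresholds=(0.5, 1.2)):
    """Bin a voxel density into pore / oxidised / dense graphite phases."""
    if rho < thresholds[0]:
        return "pore"
    return "oxidised" if rho < thresholds[1] else "dense"

voxels = [250.0, 180.0, 90.0, 40.0]          # greyscale values from tomography
phases = [phase_of(density_from_greyscale(g)) for g in voxels]
print(phases)  # → ['pore', 'oxidised', 'dense', 'dense']
```

    Each phase then receives its own density-dependent elastic and non-linear properties in the FE mesh, which is what distinguishes the multi-phase models from simpler two-phase (solid/pore) ones.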

  18. AREST model description

    International Nuclear Information System (INIS)

    Engel, D.W.; McGrail, B.P.

    1993-11-01

    The Office of Civilian Radioactive Waste Management and the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC) have supported the development of the Analytical Repository Source-Term (AREST) model at Pacific Northwest Laboratory. AREST is a computer model developed to evaluate radionuclide release from an underground geologic repository. The AREST code can be used to calculate/estimate the amount and rate of each radionuclide released from the engineered barrier system (EBS) of the repository; the EBS is the man-made or disrupted area of the repository. AREST was designed as a system-level model that simulates the behavior of the total repository by combining process-level models for the release from an individual waste package or container. AREST contains primarily analytical models for calculating the release/transport of radionuclides to the host rock that surrounds each waste package. Analytical models were used because of their small computational overhead, which allows all input parameters to be derived from statistical distributions. Recently, a one-dimensional numerical model was also incorporated into AREST to allow more detailed modeling of the transport process with arbitrary-length decay chains. The next step in modeling the EBS is to develop a model that couples the probabilistic capabilities of AREST with a more detailed process model. This model will need to account for the reactive coupling of the processes involved in release, including, to name a few: (1) the dissolution of the waste form, (2) the geochemical modeling of the groundwater, (3) the corrosion of the container overpacking, and (4) the backfill material. Several of these coupled processes are already incorporated in the current version of AREST

  19. Models in architectural design

    OpenAIRE

    Pauwels, Pieter

    2017-01-01

    Whereas architects and construction specialists used to rely mainly on sketches and physical models as representations of their own cognitive design models, they rely now more and more on computer models. Parametric models, generative models, as-built models, building information models (BIM), and so forth, they are used daily by any practitioner in architectural design and construction. Although processes of abstraction and the actual architectural model-based reasoning itself of course rema...

  20. Kantowski--Sachs cosmological models as big-bang models

    International Nuclear Information System (INIS)

    Weber, E.

    1985-01-01

    In the presence of a nonzero cosmological constant Λ, we classify the anisotropic cosmological models of the Kantowski--Sachs type by means of the quantities ε₀², q₀, and Σ₀, corresponding, respectively, to the relative root-mean-square deviation from isotropy, the deceleration parameter, and the density parameter of the perfect fluid at a given time t = t₀. We obtain for Λ > 0 a set of big-bang models of zero measure as well as a set of cosmological models of nonzero measure evolving toward the de Sitter solution